An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation

Abstract

One critical prerequisite for faithful text-to-image generation is an accurate understanding of text inputs. Existing methods leverage the text encoder of the CLIP model to represent input prompts. However, the pre-trained CLIP model can only encode English, with a maximum token length of 77. Moreover, the capacity of CLIP's text encoder is limited compared to that of Large Language Models (LLMs), which accept multilingual input, accommodate longer context, and achieve superior text representation. In this paper, we investigate using LLMs as the text encoder to improve language understanding in text-to-image generation. Unfortunately, training a text-to-image generative model with LLMs from scratch demands significant computational resources and data. To this end, we introduce a three-stage training pipeline, OmniDiffusion, that effectively and efficiently integrates an existing text-to-image model with LLMs. Specifically, we propose a lightweight adapter that enables fast training of the text-to-image model on the textual representations from LLMs. Extensive experiments demonstrate that our model supports not only multilingual input but also longer context, with superior image generation quality. Project page: https://llm-conditioned-diffusion.github.io
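
The abstract does not detail the adapter's design, but its role — mapping hidden states from a frozen LLM into the conditioning space that a CLIP-conditioned diffusion model expects — can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration: the class name LLMTextAdapter, the 4096-dim LLM hidden size, the 768-dim conditioning size, and the two-layer MLP are hypothetical choices, not the paper's actual architecture.

import torch
import torch.nn as nn

class LLMTextAdapter(nn.Module):
    """Hypothetical lightweight adapter: projects last-layer hidden
    states of a frozen LLM (assumed 4096-dim here) down to the
    diffusion model's cross-attention conditioning dimension
    (assumed 768-dim, matching common CLIP-conditioned UNets)."""

    def __init__(self, llm_dim: int = 4096, cond_dim: int = 768):
        super().__init__()
        # Small MLP projection; the LLM itself stays frozen, so only
        # these few parameters need gradients.
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, cond_dim),
            nn.GELU(),
            nn.Linear(cond_dim, cond_dim),
        )
        self.norm = nn.LayerNorm(cond_dim)

    def forward(self, llm_hidden: torch.Tensor) -> torch.Tensor:
        # llm_hidden: (batch, seq_len, llm_dim) from the frozen LLM.
        # Returns: (batch, seq_len, cond_dim) conditioning tokens for
        # the diffusion model's cross-attention layers.
        return self.norm(self.proj(llm_hidden))

adapter = LLMTextAdapter()
dummy = torch.randn(2, 256, 4096)  # e.g., a 256-token multilingual prompt
cond = adapter(dummy)              # shape: (2, 256, 768)

Training only such an adapter, with the LLM frozen, is what would keep the pipeline cheap relative to training a text-to-image model from scratch, while lifting CLIP's English-only, 77-token constraint.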

Cite

Text

Tan et al. "An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72989-8_27

Markdown

[Tan et al. "An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/tan2024eccv-empirical/) doi:10.1007/978-3-031-72989-8_27

BibTeX

@inproceedings{tan2024eccv-empirical,
  title     = {{An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation}},
  author    = {Tan, Zhiyu and Yang, Mengping and Qin, Luozheng and Yang, Hao and Qian, Ye and Zhou, Qiang and Zhang, Cheng and Li, Hao},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72989-8_27},
  url       = {https://mlanthology.org/eccv/2024/tan2024eccv-empirical/}
}