A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation

Abstract

Both text-to-image generation and large language models (LLMs) have advanced significantly. However, many text-to-image models still employ the somewhat outdated T5 and CLIP as their text encoders. In this work, we investigate the effectiveness of using modern decoder-only LLMs as text encoders for text-to-image diffusion models. We build a standardized training and evaluation pipeline that allows us to isolate and evaluate the effect of different text embeddings. We train a total of 27 text-to-image models with 12 different text encoders to analyze the critical aspects of LLMs that could impact text-to-image generation, including the approaches to extract embeddings, different LLM variants, and model sizes. Our experiments reveal that the de facto way of using last-layer embeddings as conditioning leads to inferior performance. Instead, we explore embeddings from various layers and find that layer-normalized averaging across all layers significantly improves alignment with complex prompts. Most LLMs with this conditioning outperform the baseline T5 model, demonstrating stronger performance on advanced visio-linguistic reasoning tasks.
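
The sketch below illustrates the layer-normalized averaging idea from the abstract: apply LayerNorm to each layer's hidden states and average over layers to obtain per-token conditioning embeddings. This is not the authors' code; the model name ("gpt2" as a stand-in decoder-only LLM) and the exact normalization details are illustrative assumptions.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder decoder-only LLM (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

@torch.no_grad()
def prompt_embedding(prompt: str) -> torch.Tensor:
    """Per-token conditioning embeddings: LayerNorm each layer's
    hidden states, then average across all layers."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model(**inputs)
    # hidden_states: tuple of (num_layers + 1) tensors, each of shape
    # [batch, seq_len, hidden_dim], including the input embedding layer.
    hidden = torch.stack(outputs.hidden_states, dim=0)
    # Normalize each layer's activations so no single layer's scale
    # dominates the average (LayerNorm over the hidden dimension).
    hidden = F.layer_norm(hidden, hidden.shape[-1:])
    return hidden.mean(dim=0)  # [batch, seq_len, hidden_dim]

emb = prompt_embedding("a corgi riding a red bicycle")
print(emb.shape)

In contrast, the "de facto" conditioning the abstract criticizes would use only outputs.hidden_states[-1], the last layer's hidden states.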

Cite

Text

Wang et al. "A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02661

Markdown

[Wang et al. "A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/wang2025cvpr-comprehensive/) doi:10.1109/CVPR52734.2025.02661

BibTeX

@inproceedings{wang2025cvpr-comprehensive,
  title     = {{A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation}},
  author    = {Wang, Andrew Z. and Ge, Songwei and Karras, Tero and Liu, Ming-Yu and Balaji, Yogesh},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {28575--28585},
  doi       = {10.1109/CVPR52734.2025.02661},
  url       = {https://mlanthology.org/cvpr/2025/wang2025cvpr-comprehensive/}
}