Scaling Down Text Encoders of Text-to-Image Diffusion Models

Abstract

Text encoders in diffusion models have evolved rapidly, transitioning from CLIP to T5-XXL. Although this evolution has significantly enhanced the models' ability to understand complex prompts and to generate text, it has also led to a substantial increase in parameter count. Even though T5-series encoders are trained on the C4 natural-language corpus, which includes a significant amount of non-visual data, diffusion models with a T5 encoder do not respond to those non-visual prompts, indicating redundancy in their representational power. This raises an important question: "Do we really need such a large text encoder?" In pursuit of an answer, we employ vision-based knowledge distillation to train a series of T5 encoder models. To ensure the student fully inherits the teacher's capabilities, we construct our dataset based on three criteria: image quality, semantic understanding, and text rendering. Our results demonstrate a clear scaling-down pattern: the distilled T5-Base model can generate images of quality comparable to those produced by T5-XXL while being 50 times smaller. This reduction in model size significantly lowers the GPU requirements for running state-of-the-art models such as FLUX and SD3, making high-quality text-to-image generation more accessible.
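The abstract's core technique, vision-based knowledge distillation, can be sketched roughly as follows. This is a minimal, hypothetical illustration and not the paper's implementation: a small T5 student is projected up to the teacher's embedding width and trained so that a frozen denoiser behaves the same whether it is conditioned on student or teacher embeddings. The checkpoint names, the ToyDenoiser stand-in, and the MSE feature-matching loss are all assumptions for the sketch.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel

# Sketch only: distill a small T5 encoder under a frozen denoiser's supervision.
# Checkpoint names are assumptions, not the paper's exact setup.
teacher = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl").eval()
student = T5EncoderModel.from_pretrained("google/t5-v1_1-base")
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")

# Project the student's 768-d hidden states up to the teacher's 4096-d space
# so the frozen denoiser's cross-attention accepts them unchanged.
proj = nn.Linear(student.config.d_model, teacher.config.d_model)

class ToyDenoiser(nn.Module):
    # Hypothetical stand-in for a frozen text-to-image backbone (e.g. FLUX/SD3):
    # noisy latents attend to the text embeddings via cross-attention.
    def __init__(self, text_dim, latent_dim=64):
        super().__init__()
        self.cross = nn.MultiheadAttention(
            latent_dim, 4, kdim=text_dim, vdim=text_dim, batch_first=True
        )
        self.out = nn.Linear(latent_dim, latent_dim)

    def forward(self, latents, text_emb):
        h, _ = self.cross(latents, text_emb, text_emb)
        return self.out(h)

denoiser = ToyDenoiser(text_dim=teacher.config.d_model).eval()
for p in denoiser.parameters():
    p.requires_grad_(False)  # only the student and projection are trained

opt = torch.optim.AdamW(
    list(student.parameters()) + list(proj.parameters()), lr=1e-4
)

prompts = ['a neon sign that reads "OPEN"']  # text-rendering-style prompt
tokens = tokenizer(prompts, padding=True, return_tensors="pt")
latents = torch.randn(1, 16, 64)  # toy noisy image latents

with torch.no_grad():
    teacher_emb = teacher(**tokens).last_hidden_state   # (B, L, 4096)
    target = denoiser(latents, teacher_emb)             # teacher-conditioned output

student_emb = proj(student(**tokens).last_hidden_state)  # (B, L, 4096)
pred = denoiser(latents, student_emb)                    # student-conditioned output
loss = nn.functional.mse_loss(pred, target)              # match denoising behavior
loss.backward()
opt.step()

In the paper's actual setup, the frozen denoiser would be a full text-to-image backbone such as FLUX or SD3; the toy module above only mimics its cross-attention interface so the sketch stays self-contained and runnable.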

Cite

Text

Wang et al. "Scaling Down Text Encoders of Text-to-Image Diffusion Models." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01717

Markdown

[Wang et al. "Scaling Down Text Encoders of Text-to-Image Diffusion Models." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/wang2025cvpr-scaling-a/) doi:10.1109/CVPR52734.2025.01717

BibTeX

@inproceedings{wang2025cvpr-scaling-a,
  title     = {{Scaling Down Text Encoders of Text-to-Image Diffusion Models}},
  author    = {Wang, Lifu and Liu, Daqing and Liu, Xinchen and He, Xiaodong},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {18424--18433},
  doi       = {10.1109/CVPR52734.2025.01717},
  url       = {https://mlanthology.org/cvpr/2025/wang2025cvpr-scaling-a/}
}