What if We Recaption Billions of Web Images with Llama-3?
Abstract
Web-crawled image-text pairs are inherently noisy. Prior studies demonstrate that semantically aligning and enriching the textual descriptions of these pairs can significantly enhance model training across various vision-language tasks, particularly text-to-image generation. However, large-scale investigations in this area remain predominantly closed-source. Our paper aims to bridge this gap as a community effort, leveraging the powerful and *open-sourced* LLaMA-3, a GPT-4 level LLM. Our recaptioning pipeline is simple: we first fine-tune a LLaMA-3-8B powered LLaVA-1.5 and then employ it to recaption 1.3 billion images from the DataComp-1B dataset. Our empirical results confirm that this enhanced dataset, Recap-DataComp-1B, offers substantial benefits in training advanced vision-language models. For discriminative models like CLIP, we observe an average 3.1% improvement in zero-shot performance across four cross-modal retrieval tasks when training on a mixed set of the original and our captions. For generative models like text-to-image Diffusion Transformers, the generated images exhibit significantly improved alignment with users' text instructions, especially when following complex queries. Our project page is https://www.haqtu.me/Recap-Datacomp-1B/.
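To make the recaptioning step concrete, below is a minimal sketch of how one might run a LLaVA-style captioner with a LLaMA-3-8B backbone over individual images using the Hugging Face `transformers` LLaVA API. The checkpoint path, prompt, and chat template here are illustrative assumptions, not the authors' released artifacts or exact pipeline.

```python
# Hedged sketch: recaption a single image with a LLaVA-style model.
# MODEL_ID is a placeholder, not the authors' released checkpoint.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "path/to/llava-llama3-8b-captioner"  # placeholder checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def recaption(image_path: str, prompt: str = "Describe this image in detail.") -> str:
    """Generate a dense caption for one image; prompt format follows LLaVA-1.5 style."""
    image = Image.open(image_path).convert("RGB")
    # The <image> token marks where visual features are spliced into the LLM context.
    text = f"USER: <image>\n{prompt} ASSISTANT:"
    inputs = processor(images=image, text=text, return_tensors="pt").to(
        model.device, torch.float16
    )
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    decoded = processor.decode(output[0], skip_special_tokens=True)
    return decoded.split("ASSISTANT:")[-1].strip()

print(recaption("example.jpg"))
```

In the paper's setting this generation loop would be applied at scale to all 1.3 billion DataComp-1B images, and the resulting captions mixed with the original web alt-text for downstream CLIP and text-to-image training.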
Cite
Text
Li et al. "What if We Recaption Billions of Web Images with Llama-3?" Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Li et al. "What if We Recaption Billions of Web Images with Llama-3?" Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/li2025icml-we/)

BibTeX
@inproceedings{li2025icml-we,
  title     = {{What if We Recaption Billions of Web Images with Llama-3?}},
  author    = {Li, Xianhang and Tu, Haoqin and Hui, Mude and Wang, Zeyu and Zhao, Bingchen and Xiao, Junfei and Ren, Sucheng and Mei, Jieru and Liu, Qing and Zheng, Huangjie and Zhou, Yuyin and Xie, Cihang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {35957--35976},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/li2025icml-we/}
}