Multimodal Autoregressive Pre-Training of Large Vision Encoders

Abstract

We introduce a novel method for pre-training of large-scale vision encoders. Building on recent advancements in autoregressive pre-training of vision models, we extend this framework to a multimodal setting, i.e., images and text. In this paper, we present AIMV2, a family of generalist vision encoders characterized by a straightforward pre-training process, scalability, and remarkable performance across a range of downstream tasks. This is achieved by pairing the vision encoder with a multimodal decoder that autoregressively generates raw image patches and text tokens. Our encoders excel not only in multimodal evaluations but also in vision benchmarks such as localization, grounding, and classification. Notably, our AIMV2-3B encoder achieves 89.5% accuracy on ImageNet-1k with a frozen trunk. Furthermore, AIMV2 consistently outperforms state-of-the-art contrastive models (e.g., CLIP, SigLIP) in multimodal image understanding across diverse settings.
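The objective the abstract describes (a vision encoder feeding a multimodal decoder that autoregressively predicts raw image patches and then text tokens) can be made concrete with a minimal sketch. Everything below is an illustrative assumption, not the paper's implementation: the module sizes, the bidirectional-encoder/causal-decoder split, the next-element prediction layout, and the equal weighting of the two losses are all placeholders.

```python
# Hypothetical sketch of an AIMV2-style multimodal autoregressive objective.
# Sizes, layer counts, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH_DIM, D, VOCAB, N_PATCH, N_TEXT = 16 * 16 * 3, 256, 1000, 196, 32

def causal_mask(n):
    # Bool mask: True above the diagonal blocks attention, enforcing
    # left-to-right (autoregressive) order over the joint sequence.
    return torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)

class AIMv2Sketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.patch_proj = nn.Linear(PATCH_DIM, D)
        enc = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)  # vision encoder
        self.text_embed = nn.Embedding(VOCAB, D)
        dec = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, num_layers=2)  # causal decoder
        self.pixel_head = nn.Linear(D, PATCH_DIM)  # regresses raw patch values
        self.text_head = nn.Linear(D, VOCAB)       # classifies next text token

    def forward(self, patches, text_ids):
        # Encode image patches, then decode [image features, text] causally.
        feats = self.encoder(self.patch_proj(patches))           # (B, N_PATCH, D)
        seq = torch.cat([feats, self.text_embed(text_ids)], 1)   # image, then text
        h = self.decoder(seq, mask=causal_mask(seq.size(1)).to(patches.device))
        # Position i predicts element i+1: patches first, then text tokens.
        pix_pred = self.pixel_head(h[:, : N_PATCH - 1])          # -> patches 1..N-1
        txt_pred = self.text_head(h[:, N_PATCH - 1 : -1])        # -> tokens 0..T-1
        img_loss = F.mse_loss(pix_pred, patches[:, 1:])
        txt_loss = F.cross_entropy(
            txt_pred.reshape(-1, VOCAB), text_ids.reshape(-1))
        return img_loss + txt_loss  # equal weighting is an assumption

model = AIMv2Sketch()
loss = model(torch.randn(2, N_PATCH, PATCH_DIM),
             torch.randint(0, VOCAB, (2, N_TEXT)))
loss.backward()
```

The key design point the sketch illustrates is that both modalities share one causal decoder, so a single next-element prediction loss covers pixel regression and token classification; at downstream time the decoder is discarded and only the encoder trunk is kept (e.g., the frozen-trunk ImageNet-1k evaluation above).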

Cite

Text

Fini et al. "Multimodal Autoregressive Pre-Training of Large Vision Encoders." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00901

Markdown

[Fini et al. "Multimodal Autoregressive Pre-Training of Large Vision Encoders." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/fini2025cvpr-multimodal/) doi:10.1109/CVPR52734.2025.00901

BibTeX

@inproceedings{fini2025cvpr-multimodal,
  title     = {{Multimodal Autoregressive Pre-Training of Large Vision Encoders}},
  author    = {Fini, Enrico and Shukor, Mustafa and Li, Xiujun and Dufter, Philipp and Klein, Michal and Haldimann, David and Aitharaju, Sai and da Costa, Victor G. Turrisi and Béthune, Louis and Gan, Zhe and Toshev, Alexander and Eichner, Marcin and Nabi, Moin and Yang, Yinfei and Susskind, Joshua and El-Nouby, Alaaeldin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {9641--9654},
  doi       = {10.1109/CVPR52734.2025.00901},
  url       = {https://mlanthology.org/cvpr/2025/fini2025cvpr-multimodal/}
}