Scaling Laws for Native Multimodal Models

Abstract

Building general-purpose models that can effectively perceive the world through multimodal signals has been a long-standing goal. Current approaches involve integrating separately pre-trained components, such as connecting vision encoders to LLMs and continuing multimodal training. While such approaches exhibit remarkable sample efficiency, it remains an open question whether such late-fusion architectures are inherently superior. In this work, we revisit the architectural design of native multimodal models (NMMs)--those trained from the ground up on all modalities--and conduct an extensive scaling laws study, spanning 457 trained models with different architectures and training mixtures. Our investigation reveals no inherent advantage to late-fusion architectures over early-fusion ones, which do not rely on image encoders or tokenizers. On the contrary, early fusion exhibits stronger performance at lower parameter counts, is more efficient to train, and is easier to deploy. Motivated by the strong performance of the early-fusion architectures, we show that incorporating Mixture of Experts (MoEs) allows models to learn modality-specific weights, significantly benefiting performance.
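The abstract contrasts late-fusion pipelines (a separately pre-trained vision encoder connected to an LLM) with early-fusion models that consume raw image patches and text in a single backbone. As a rough illustration only, below is a minimal PyTorch sketch of the early-fusion idea with hypothetical layer sizes; positional embeddings, causal attention masking, and the MoE variant discussed in the paper are omitted, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class EarlyFusionBackbone(nn.Module):
    """Single transformer that processes text tokens and raw image patches
    in one interleaved sequence, with no separate vision encoder or image
    tokenizer (hypothetical sizes, illustration only)."""

    def __init__(self, vocab_size=32000, d_model=512, n_layers=4,
                 n_heads=8, patch_size=16, in_channels=3):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Images enter as flattened patches through a single linear layer.
        self.patch_embed = nn.Linear(patch_size * patch_size * in_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, text_ids, image_patches):
        # text_ids: (B, T_text); image_patches: (B, T_img, patch_dim)
        text_tokens = self.text_embed(text_ids)
        image_tokens = self.patch_embed(image_patches)
        # One shared sequence: image tokens followed by text tokens.
        x = torch.cat([image_tokens, text_tokens], dim=1)
        x = self.blocks(x)
        return self.lm_head(x)


# Toy forward pass with random inputs to check shapes only.
model = EarlyFusionBackbone()
text_ids = torch.randint(0, 32000, (2, 16))
image_patches = torch.randn(2, 64, 16 * 16 * 3)
logits = model(text_ids, image_patches)
print(logits.shape)  # torch.Size([2, 80, 32000])
```

A late-fusion counterpart would replace `patch_embed` with a full pre-trained vision encoder whose outputs are projected into the LLM's embedding space; the point of the sketch is that early fusion needs only a single trainable projection for image patches.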

Cite

Text

Shukor et al. "Scaling Laws for Native Multimodal Models." International Conference on Computer Vision, 2025.

Markdown

[Shukor et al. "Scaling Laws for Native Multimodal Models." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/shukor2025iccv-scaling/)

BibTeX

@inproceedings{shukor2025iccv-scaling,
  title     = {{Scaling Laws for Native Multimodal Models}},
  author    = {Shukor, Mustafa and Fini, Enrico and da Costa, Victor Guilherme Turrisi and Cord, Matthieu and Susskind, Joshua and El-Nouby, Alaaeldin},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {12--23},
  url       = {https://mlanthology.org/iccv/2025/shukor2025iccv-scaling/}
}