Accelerating Pre-Training of Multimodal LLMs via Chain-of-Sight

Abstract

This paper introduces Chain-of-Sight, a vision-language bridge module that accelerates the pre-training of Multimodal Large Language Models (MLLMs). Our approach employs a sequence of visual resamplers that capture visual details at various spatial scales. This architecture not only leverages global and local visual contexts effectively, but also facilitates the flexible extension of visual tokens through a compound token scaling strategy, allowing up to a 16x increase in the token count after pre-training. Consequently, Chain-of-Sight requires significantly fewer visual tokens in the pre-training phase than in the fine-tuning phase. This intentional reduction of visual tokens during pre-training notably accelerates pre-training, cutting the wall-clock training time by $\sim$73\%. Empirical results on a series of vision-language benchmarks show that the pre-training acceleration achieved by Chain-of-Sight comes without sacrificing performance, matching or surpassing the standard pipeline that uses all visual tokens throughout training. Further scaling up the number of visual tokens for pre-training yields stronger performance, competitive with existing approaches across a range of benchmarks.
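
The abstract's core idea, resampling visual features into a small, scalable set of tokens at multiple spatial scales, can be illustrated with a minimal PyTorch sketch. The module name, query counts, and dimensions below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of multi-scale visual token resampling (illustrative only;
# not the paper's actual Chain-of-Sight module).
import torch
import torch.nn as nn

class MultiScaleResampler(nn.Module):
    """Resamples a grid of patch features into a few tokens per spatial scale
    via learned queries and cross-attention; coarse-to-fine tokens are concatenated."""

    def __init__(self, dim=1024, num_queries=(1, 4, 16), num_heads=8):
        super().__init__()
        # One learned query set per scale; enabling more/larger sets after
        # pre-training is one way to scale up the visual token count.
        self.queries = nn.ParameterList(
            [nn.Parameter(torch.randn(n, dim) * 0.02) for n in num_queries]
        )
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats):  # feats: (B, H*W, dim) patch features
        tokens = []
        for q in self.queries:
            q = q.unsqueeze(0).expand(feats.size(0), -1, -1)
            out, _ = self.attn(q, feats, feats)  # cross-attend queries to patches
            tokens.append(out)
        return torch.cat(tokens, dim=1)  # (B, sum(num_queries), dim)

if __name__ == "__main__":
    x = torch.randn(2, 256, 1024)          # e.g. a 16x16 patch grid
    print(MultiScaleResampler()(x).shape)  # torch.Size([2, 21, 1024])

In this sketch, pre-training could run with only the coarse query sets and fine-tuning with all of them, which mirrors the paper's strategy of using far fewer visual tokens during pre-training than during fine-tuning.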

Cite

Text

Huang et al. "Accelerating Pre-Training of Multimodal LLMs via Chain-of-Sight." Neural Information Processing Systems, 2024. doi:10.52202/079017-2409

Markdown

[Huang et al. "Accelerating Pre-Training of Multimodal LLMs via Chain-of-Sight." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/huang2024neurips-accelerating/) doi:10.52202/079017-2409

BibTeX

@inproceedings{huang2024neurips-accelerating,
  title     = {{Accelerating Pre-Training of Multimodal LLMs via Chain-of-Sight}},
  author    = {Huang, Ziyuan and Ji, Kaixiang and Gong, Biao and Qing, Zhiwu and Zhang, Qinglong and Zheng, Kecheng and Wang, Jian and Chen, Jingdong and Yang, Ming},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2409},
  url       = {https://mlanthology.org/neurips/2024/huang2024neurips-accelerating/}
}