LongVILA: Scaling Long-Context Visual Language Models for Long Videos
Abstract
Long-context capability is critical for multi-modal foundation models, especially for long video understanding. We introduce LongVILA, a full-stack solution for long-context visual-language models that co-designs the algorithm and the system. For model training, we upgrade existing VLMs to support long video understanding by incorporating two additional stages, i.e., long-context extension and long video supervised fine-tuning. However, training on long videos is compute- and memory-intensive. We introduce the long-context Multi-Modal Sequence Parallelism (MM-SP) system, which efficiently parallelizes long video training and inference, enabling training with a 2M-token context length on 256 GPUs without any gradient checkpointing. LongVILA efficiently extends the number of video frames VILA can process from 8 to 2,048, achieving 99.8% accuracy on a 6,000-frame (more than 1 million tokens) video needle-in-a-haystack task. LongVILA-7B demonstrates strong accuracy on 9 popular video benchmarks, e.g., 65.1% on VideoMME with subtitles. In addition, MM-SP is 2.1x-5.7x faster than ring-style sequence parallelism and 1.1x-1.4x faster than Megatron with hybrid context and tensor parallelism. Moreover, it seamlessly integrates with Hugging Face Transformers.
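To make the scale of the sequence-parallelism claim concrete, below is a minimal, single-process sketch of how a multi-modal sequence-parallelism scheme might shard one long interleaved video+text token sequence evenly across GPU ranks. This is not the authors' MM-SP implementation; the function name `shard_sequence`, the 196-tokens-per-frame figure, and the 1,024 text tokens are illustrative assumptions.

```python
# Hypothetical sketch of sequence-parallel sharding, not the MM-SP system itself.

def shard_sequence(token_ids, world_size):
    """Split a token sequence into contiguous, near-equal shards, one per rank."""
    n = len(token_ids)
    base, rem = divmod(n, world_size)
    shards, start = [], 0
    for rank in range(world_size):
        length = base + (1 if rank < rem else 0)  # spread the remainder evenly
        shards.append(token_ids[start:start + length])
        start += length
    return shards

if __name__ == "__main__":
    TOKENS_PER_FRAME = 196   # assumed visual tokens per frame (illustrative)
    num_frames = 2048        # LongVILA's maximum frame count
    text_tokens = 1024       # assumed prompt/subtitle tokens (illustrative)
    seq = list(range(num_frames * TOKENS_PER_FRAME + text_tokens))
    shards = shard_sequence(seq, world_size=256)  # 256 GPUs, as in the paper
    print(f"total tokens: {len(seq)}, tokens per rank: {len(shards[0])}")
```

Even at 2,048 frames, each of the 256 ranks holds only a few thousand tokens of the sequence, which is what makes million-token contexts tractable without gradient checkpointing.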
Cite
Text
Chen et al. "LongVILA: Scaling Long-Context Visual Language Models for Long Videos." International Conference on Learning Representations, 2025.
Markdown
[Chen et al. "LongVILA: Scaling Long-Context Visual Language Models for Long Videos." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/chen2025iclr-longvila/)
BibTeX
@inproceedings{chen2025iclr-longvila,
title = {{LongVILA: Scaling Long-Context Visual Language Models for Long Videos}},
author = {Chen, Yukang and Xue, Fuzhao and Li, Dacheng and Hu, Qinghao and Zhu, Ligeng and Li, Xiuyu and Fang, Yunhao and Tang, Haotian and Yang, Shang and Liu, Zhijian and He, Yihui and Yin, Hongxu and Molchanov, Pavlo and Kautz, Jan and Fan, Linxi and Zhu, Yuke and Lu, Yao and Han, Song},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/chen2025iclr-longvila/}
}