Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers

Abstract

State-of-the-art transformer-based large multimodal models (LMMs) struggle to handle hour-long video inputs due to the quadratic complexity of the causal self-attention operations, leading to high computational costs during training and inference. Existing token compression-based methods reduce the number of video tokens but often incur information loss and remain inefficient for extremely long sequences. In this paper, we explore an orthogonal direction to build a hybrid Mamba-Transformer model (VAMBA) that employs Mamba-2 blocks to encode video tokens with linear complexity. Without any token reduction, VAMBA can encode more than 1024 frames (640×360) on a single GPU, while transformer-based models can only encode 256 frames. On long video inputs, VAMBA achieves at least a 50% reduction in GPU memory usage during training and inference, and nearly doubles the speed per training step compared to transformer-based LMMs. Our experimental results demonstrate that VAMBA improves accuracy by 4.3% on the challenging hour-long video understanding benchmark LVBench over prior efficient video LMMs, and maintains strong performance on a broad spectrum of long and short video understanding tasks.
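
The sketch below is a minimal, illustrative contrast between quadratic causal self-attention and a linear-time gated recurrence of the kind Mamba-style blocks rely on, arranged as a hybrid layer that mixes the (long) video token stream with a recurrence and the (short) text tokens with attention. It is not the authors' implementation: the module names, dimensions, and the toy recurrence are assumptions for illustration only, and the real Mamba-2 block uses a structured state-space parameterization with a parallel scan kernel.

```python
# Toy sketch (NOT VAMBA's actual architecture): hybrid layer pairing a
# linear-time recurrence over video tokens with causal attention over text.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalSelfAttention(nn.Module):
    """Standard causal attention: O(L^2) time/memory in sequence length L."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.num_heads = num_heads

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, l, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, l, self.num_heads, -1).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, l, d))


class GatedLinearRecurrence(nn.Module):
    """Toy SSM-style mixer that runs in O(L): h_t = a_t * h_{t-1} + b_t * x_t.

    This stands in for a Mamba-2 block only conceptually; production kernels
    compute the same kind of recurrence with a parallel scan, not a Python loop.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.gate(x).chunk(2, dim=-1)
        a, b = torch.sigmoid(a), torch.sigmoid(b)
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.shape[1]):              # sequential loop for clarity only
            h = a[:, t] * h + b[:, t] * x[:, t]  # per-channel gated recurrence
            outs.append(h)
        return self.proj(torch.stack(outs, dim=1))


class HybridBlock(nn.Module):
    """Hybrid layer: linear-time mixing for many video tokens,
    quadratic attention reserved for the few text tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.video_mixer = GatedLinearRecurrence(dim)
        self.text_mixer = CausalSelfAttention(dim)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, video: torch.Tensor, text: torch.Tensor):
        video = video + self.video_mixer(self.norm_v(video))
        text = text + self.text_mixer(self.norm_t(text))
        return video, text


if __name__ == "__main__":
    dim = 64
    block = HybridBlock(dim)
    video_tokens = torch.randn(1, 4096, dim)  # long video stream
    text_tokens = torch.randn(1, 32, dim)     # short text prompt
    v, t = block(video_tokens, text_tokens)
    print(v.shape, t.shape)
```

The design point being illustrated is the asymmetry: the expensive quadratic operator is applied only to the short text sequence, while the long video stream is processed by an operator whose cost and memory grow linearly with the number of frames.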

Cite

Text

Ren et al. "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers." International Conference on Computer Vision, 2025.

Markdown

[Ren et al. "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/ren2025iccv-vamba/)

BibTeX

@inproceedings{ren2025iccv-vamba,
  title     = {{Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers}},
  author    = {Ren, Weiming and Ma, Wentao and Yang, Huan and Wei, Cong and Zhang, Ge and Chen, Wenhu},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {21197--21208},
  url       = {https://mlanthology.org/iccv/2025/ren2025iccv-vamba/}
}