Understanding Long Videos with Multimodal Language Models

Abstract

Large Language Models (LLMs) have allowed recent LLM-based approaches to achieve excellent performance on long-video understanding benchmarks. We investigate how the extensive world knowledge and strong reasoning skills of the underlying LLMs influence this performance. Surprisingly, we discover that LLM-based approaches can yield good accuracy on long-video tasks with limited video information, sometimes even with no video-specific information at all. Building on this, we explore injecting video-specific information into an LLM-based framework. We use off-the-shelf vision tools to extract three object-centric information modalities from videos and then leverage natural language as a medium for fusing this information. Our resulting Multimodal Video Understanding (MVU) framework achieves state-of-the-art performance across multiple video understanding benchmarks. Strong performance on robotics-domain tasks further establishes its generality. Code: github.com/kahnchana/mvu

Cite

Text

Ranasinghe et al. "Understanding Long Videos with Multimodal Language Models." International Conference on Learning Representations, 2025.

Markdown

[Ranasinghe et al. "Understanding Long Videos with Multimodal Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/ranasinghe2025iclr-understanding/)

BibTeX

@inproceedings{ranasinghe2025iclr-understanding,
  title     = {{Understanding Long Videos with Multimodal Language Models}},
  author    = {Ranasinghe, Kanchana and Li, Xiang and Kahatapitiya, Kumara and Ryoo, Michael S.},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/ranasinghe2025iclr-understanding/}
}