Unleashing Hour-Scale Video Training for Long Video-Language Understanding
Abstract
Recent long-form video-language understanding benchmarks have driven progress in video large multimodal models (Video-LMMs). However, the scarcity of well-annotated long videos has left the training of hour-long Video-LMMs underexplored. To close this gap, we present VideoMarathon, a large-scale hour-long video instruction-following dataset. This dataset includes around 9,700 hours of long videos sourced from diverse domains, ranging from 3 to 60 minutes per video. Specifically, it contains 3.3M high-quality QA pairs, spanning six fundamental topics: temporality, spatiality, object, action, scene, and event. Compared to existing video instruction datasets, VideoMarathon significantly extends training video durations up to 1 hour, and supports 22 diverse tasks requiring both short- and long-term video comprehension. Building on VideoMarathon, we propose Hour-LLaVA, a powerful and efficient Video-LMM for hour-scale video-language modeling. It enables hour-long video training and inference at 1-FPS sampling by leveraging a memory augmentation module, which adaptively integrates question-relevant and spatiotemporally informative semantics from the cached full video context. In our experiments, Hour-LLaVA achieves the best performance on multiple representative long video-language benchmarks, demonstrating the high quality of the VideoMarathon dataset and the superiority of the Hour-LLaVA model.
Cite
Text
Lin et al. "Unleashing Hour-Scale Video Training for Long Video-Language Understanding." Advances in Neural Information Processing Systems, 2025.
Markdown
[Lin et al. "Unleashing Hour-Scale Video Training for Long Video-Language Understanding." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lin2025neurips-unleashing/)
BibTeX
@inproceedings{lin2025neurips-unleashing,
title = {{Unleashing Hour-Scale Video Training for Long Video-Language Understanding}},
author = {Lin, Jingyang and Wu, Jialian and Sun, Ximeng and Wang, Ze and Liu, Jiang and Su, Yusheng and Yu, Xiaodong and Chen, Hao and Luo, Jiebo and Liu, Zicheng and Barsoum, Emad},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/lin2025neurips-unleashing/}
}