MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding

Abstract

The advent of large vision-language models (LVLMs) has spurred research into their applications in multi-modal contexts, particularly in video understanding. Traditional VideoQA benchmarks, despite providing quantitative metrics, often fail to encompass the full spectrum of video content and inadequately assess models' temporal comprehension. To address these limitations, we introduce MMBench-Video, a quantitative benchmark designed to rigorously evaluate LVLMs' proficiency in video understanding. MMBench-Video incorporates lengthy videos from YouTube and employs free-form questions, mirroring practical use cases. The benchmark is meticulously crafted to probe the models' temporal reasoning skills, with all questions human-annotated according to a carefully constructed ability taxonomy. We employ GPT-4 for automated assessment, demonstrating superior accuracy and robustness over earlier LLM-based evaluations. Utilizing MMBench-Video, we have conducted comprehensive evaluations that include both proprietary and open-source LVLMs for images and videos. MMBench-Video stands as a valuable resource for the research community, facilitating improved evaluation of LVLMs and catalyzing progress in the field of video understanding.

Cite

Text

Fang et al. "MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding." Neural Information Processing Systems, 2024. doi:10.52202/079017-2827

Markdown

[Fang et al. "MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/fang2024neurips-mmbenchvideo/) doi:10.52202/079017-2827

BibTeX

@inproceedings{fang2024neurips-mmbenchvideo,
  title     = {{MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding}},
  author    = {Fang, Xinyu and Mao, Kangrui and Duan, Haodong and Zhao, Xiangyu and Li, Yining and Lin, Dahua and Chen, Kai},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2827},
  url       = {https://mlanthology.org/neurips/2024/fang2024neurips-mmbenchvideo/}
}