SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding
Abstract
Despite the significant advances of Large Vision-Language Models (LVLMs) on established benchmarks, there remains a notable gap in evaluating their applicability to the emerging domain of long-context streaming video understanding. Current benchmarks for video understanding typically emphasize isolated single-instance text inputs and fail to evaluate the capacity to sustain temporal reasoning throughout the entire duration of video streams. To address these limitations, we introduce SVBench, a pioneering benchmark with temporal multi-turn question-answering chains specifically designed to thoroughly assess the streaming video understanding capabilities of current LVLMs. We design a semi-automated annotation pipeline to obtain 49,979 Question-Answer (QA) pairs from 1,353 streaming videos; the pipeline generates QA chains that represent a series of consecutive multi-turn dialogues over video segments and constructs temporal linkages between successive QA chains. Our experimental results, obtained from 14 models in dialogue and streaming evaluations, reveal that while the closed-source GPT-4o outperforms others, most open-source LVLMs struggle with long-context streaming video understanding. We also construct a StreamingChat model, which significantly outperforms open-source LVLMs on our SVBench and achieves comparable performance on diverse vision-language benchmarks. We expect SVBench to advance research on streaming video understanding by providing a comprehensive and in-depth analysis of current LVLMs. Our benchmark and model can be accessed at https://yzy-bupt.github.io/SVBench.
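For illustration only, the following minimal Python sketch shows one way a temporally linked QA chain over a video segment could be represented; the class and field names are assumptions for this sketch, not the released SVBench data schema.

# Hypothetical sketch (not the official SVBench schema): one multi-turn QA chain
# over a video segment, with a temporal link to the chain on the next segment.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class QATurn:
    question: str
    answer: str

@dataclass
class QAChain:
    video_id: str
    segment: Tuple[float, float]          # (start_sec, end_sec) of the video segment
    turns: List[QATurn] = field(default_factory=list)
    next_chain_id: Optional[str] = None   # temporal linkage to the successive QA chain

chain = QAChain(
    video_id="example_video",
    segment=(0.0, 30.0),
    turns=[QATurn("What is happening at the start?", "A person enters the room.")],
    next_chain_id="example_video_chain_2",
)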
Cite
Text
Yang et al. "SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding." International Conference on Learning Representations, 2025.
Markdown
[Yang et al. "SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yang2025iclr-svbench/)
BibTeX
@inproceedings{yang2025iclr-svbench,
title = {{SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding}},
author = {Yang, Zhenyu and Hu, Yuhang and Du, Zemin and Xue, Dizhan and Qian, Shengsheng and Wu, Jiahong and Yang, Fan and Dong, Weiming and Xu, Changsheng},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/yang2025iclr-svbench/}
}