Modularized Self-Reflected Video Reasoner for Multimodal LLM with Application to Video Question Answering
Abstract
Multimodal Large Language Models (Multimodal LLMs) have shown their strength in Video Question Answering (VideoQA). However, due to the black-box nature of end-to-end training, existing Multimodal LLM-based approaches lack interpretability for VideoQA: they can neither present reasoning paths nor indicate where in the video their answers are derived from. To address this issue, we propose MSR-ViR (Modularized Self-Reflected Video Reasoner), which for the first time integrates modular networks into Multimodal LLMs, providing VideoQA with explicit reasoning paths for improved interpretability. Specifically, a MoST-Grounding (Modularized Spatial-Temporal Grounding) network is proposed to decompose complex questions via tree-structured policies, localizing relevant temporal and spatial segments within videos through step-by-step reasoning. The MoST-Grounding network supplies Multimodal LLMs with explicitly grounded visual information and clear reasoning paths, thereby enhancing the interpretability of predicted answers. To further improve reasoning quality, we design an Alternate Self-reflection Training Strategy that jointly optimizes policy generation and the Multimodal LLM. Experiments on real-world datasets demonstrate the superiority of the proposed MSR-ViR framework in video understanding, reasoning transparency, and providing explicit localization evidence for answers.
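The abstract describes MoST-Grounding executing a tree-structured policy over the video to localize temporal and spatial evidence before the Multimodal LLM generates an answer. Below is a minimal Python sketch of what such a policy-execution loop could look like; the data structures, module names, and placeholder grounding logic are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class VideoView:
    """Current selection of video evidence: surviving frames plus an optional region."""
    frame_ids: List[int]
    region: Optional[Tuple[int, int, int, int]] = None  # (x1, y1, x2, y2)

@dataclass
class PolicyNode:
    """One node of a tree-structured policy produced from the question."""
    op: str                               # e.g. "temporal_ground", "spatial_ground"
    argument: str                         # natural-language argument for the module
    children: List["PolicyNode"] = field(default_factory=list)

# Placeholder grounding modules; the real modules are learned networks.
def temporal_ground(view: VideoView, argument: str) -> VideoView:
    # Toy behavior: keep the second half of the currently selected frames.
    kept = view.frame_ids[len(view.frame_ids) // 2:]
    return VideoView(frame_ids=kept, region=view.region)

def spatial_ground(view: VideoView, argument: str) -> VideoView:
    # Toy behavior: attach a fixed bounding box as the grounded region.
    return VideoView(frame_ids=view.frame_ids, region=(0, 0, 128, 128))

MODULES: Dict[str, Callable[[VideoView, str], VideoView]] = {
    "temporal_ground": temporal_ground,
    "spatial_ground": spatial_ground,
}

def execute_policy(node: PolicyNode, view: VideoView, trace: List[str]) -> VideoView:
    """Depth-first execution of the policy tree, recording each step as an explicit reasoning path."""
    for child in node.children:
        view = execute_policy(child, view, trace)
    view = MODULES[node.op](view, node.argument)
    trace.append(f"{node.op}({node.argument!r}) -> frames={view.frame_ids}, region={view.region}")
    return view

if __name__ == "__main__":
    # Toy policy for a question like: "What does the person hold after the dog jumps?"
    policy = PolicyNode(
        op="spatial_ground", argument="the person's hands",
        children=[PolicyNode(op="temporal_ground", argument="after the dog jumps")],
    )
    video = VideoView(frame_ids=list(range(16)))
    trace: List[str] = []
    grounded = execute_policy(policy, video, trace)
    # The grounded segment and the recorded trace would then be passed to the
    # Multimodal LLM as explicit, interpretable evidence for answer generation.
    print("\n".join(trace))
```

The key design point this sketch highlights is that every localization step is logged in `trace`, so the final answer can be accompanied by an explicit reasoning path and localization evidence rather than a single opaque forward pass.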
Cite
Text
Song et al. "Modularized Self-Reflected Video Reasoner for Multimodal LLM with Application to Video Question Answering." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Song et al. "Modularized Self-Reflected Video Reasoner for Multimodal LLM with Application to Video Question Answering." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/song2025icml-modularized/)
BibTeX
@inproceedings{song2025icml-modularized,
title = {{Modularized Self-Reflected Video Reasoner for Multimodal LLM with Application to Video Question Answering}},
author = {Song, Zihan and Wang, Xin and Qian, Zi and Chen, Hong and Huang, Longtao and Xue, Hui and Zhu, Wenwu},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {56389--56413},
volume = {267},
url = {https://mlanthology.org/icml/2025/song2025icml-modularized/}
}