Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation

Abstract

This paper tackles the problem of video question answering (VideoQA), a task that often requires multi-step reasoning and a profound understanding of spatial-temporal dynamics. While large video-language models perform well on benchmarks, they often lack explainability and spatial-temporal grounding. In this paper, we propose **A**gent-**o**f-**T**houghts **D**istillation (**AoTD**), a method that enhances models by incorporating automatically generated Chain-of-Thoughts (CoTs) into the instruction-tuning process. Specifically, we leverage an agent-based system to decompose complex questions into sub-tasks and address them with specialized vision models; the intermediate results are then treated as reasoning chains. We also introduce a verification mechanism using a large language model (LLM) to ensure the reliability of the generated CoTs. Extensive experiments demonstrate that AoTD improves performance on multiple-choice and open-ended benchmarks.
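The agent-based CoT generation described in the abstract can be sketched as a small pipeline: decompose the question into sub-tasks, answer each with a specialist model, collect the intermediate results as a reasoning chain, and keep the chain only if a verifier accepts it. The function names and toy sub-task solvers below are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch of an AoTD-style CoT-generation pipeline (all components
# are toy stand-ins for the decomposer, specialized vision models, and
# LLM verifier described in the abstract).

def decompose(question):
    """Split a complex question into ordered sub-tasks (stub decomposer)."""
    return [
        f"locate the objects relevant to: {question}",
        f"track them over time for: {question}",
        f"answer using the gathered evidence: {question}",
    ]

def solve_subtask(subtask, video=None):
    """Stand-in for a specialized vision model answering one sub-task."""
    return f"[result of '{subtask}']"

def verify(chain, final_answer):
    """Stand-in for the LLM verifier: accept non-empty, answered chains."""
    return len(chain) > 0 and final_answer is not None

def generate_cot(question, video=None):
    """Run the agent pipeline; return a verified (chain, answer) or None."""
    chain = [solve_subtask(s, video) for s in decompose(question)]
    answer = chain[-1]  # in this sketch, the last step yields the answer
    return (chain, answer) if verify(chain, answer) else None

cot = generate_cot("What does the person pick up after opening the fridge?")
```

Verified chains produced this way would then serve as distillation targets during instruction tuning, as the abstract outlines.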

Cite

Text

Shi et al. "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00797

Markdown

[Shi et al. "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/shi2025cvpr-enhancing/) doi:10.1109/CVPR52734.2025.00797

BibTeX

@inproceedings{shi2025cvpr-enhancing,
  title     = {{Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation}},
  author    = {Shi, Yudi and Di, Shangzhe and Chen, Qirui and Xie, Weidi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {8523--8533},
  doi       = {10.1109/CVPR52734.2025.00797},
  url       = {https://mlanthology.org/cvpr/2025/shi2025cvpr-enhancing/}
}