LLaVA-CoT: Let Vision Language Models Reason Step-by-Step
Abstract
Large language models have demonstrated substantial advancements in reasoning capabilities. However, current Vision-Language Models (VLMs) often struggle to perform systematic and structured reasoning, especially when handling complex visual question-answering tasks. In this work, we introduce LLaVA-CoT, a large VLM designed to conduct autonomous multistage reasoning. Unlike chain-of-thought prompting, LLaVA-CoT independently engages in sequential stages of summarization, visual interpretation, logical reasoning, and conclusion generation. This structured approach enables LLaVA-CoT to achieve marked improvements on reasoning-intensive tasks. To accomplish this, we construct the LLaVA-CoT-100k dataset, integrating samples from a variety of visual question-answering sources and providing structured reasoning annotations. In addition, we propose a test-time stage-wise retracing search method (SWIRES), which enables effective and efficient test-time scaling. Remarkably, with only 100k training samples and test-time scaling, LLaVA-CoT not only outperforms its base model by 9.4% on a wide range of multimodal reasoning benchmarks, but also surpasses the performance of larger and even closed-source models, such as Gemini-1.5-pro, GPT-4o-mini, and Llama-3.2-90B-Vision-Instruct. The code, dataset, and pre-trained weights are publicly available at https://github.com/PKU-YuanGroup/LLaVA-CoT.
Cite
Text
Xu et al. "LLaVA-CoT: Let Vision Language Models Reason Step-by-Step." International Conference on Computer Vision, 2025.
Markdown
[Xu et al. "LLaVA-CoT: Let Vision Language Models Reason Step-by-Step." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/xu2025iccv-llavacot/)
BibTeX
@inproceedings{xu2025iccv-llavacot,
title = {{LLaVA-CoT: Let Vision Language Models Reason Step-by-Step}},
author = {Xu, Guowei and Jin, Peng and Wu, Ziang and Li, Hao and Song, Yibing and Sun, Lichao and Yuan, Li},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {2087--2098},
url = {https://mlanthology.org/iccv/2025/xu2025iccv-llavacot/}
}