Multimodal Chain-of-Thought Reasoning in Language Models
Abstract
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have primarily focused on the language modality. We propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. Experimental results on ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach. With Multimodal-CoT, our model under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates that Multimodal-CoT offers the advantages of mitigating hallucination and enhancing convergence speed. Code is publicly available at https://github.com/amazon-science/mm-cot.
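To make the two-stage framework described above concrete, here is a minimal sketch of the control flow: stage 1 generates a rationale from the combined text and vision inputs, and stage 2 conditions answer inference on that rationale. The functions and class below are hypothetical stubs for illustration, not the paper's actual implementation (the released code in the linked repository uses fine-tuned vision-language models); only the two-stage separation of rationale generation and answer inference reflects the method.

```python
# Hypothetical sketch of the two-stage Multimodal-CoT pipeline.
# Stage 1: rationale generation from text + vision.
# Stage 2: answer inference conditioned on the generated rationale.

from dataclasses import dataclass


@dataclass
class MultimodalInput:
    question: str          # language (text) modality
    context: str           # optional textual context, e.g. answer options
    image_features: list   # vision modality, e.g. features from an image encoder


def generate_rationale(x: MultimodalInput) -> str:
    """Stage 1: produce an intermediate reasoning chain from both modalities."""
    # Placeholder for a fine-tuned encoder-decoder conditioned on text and image.
    return f"Reasoning grounded in the image and the question: {x.question}"


def infer_answer(x: MultimodalInput, rationale: str) -> str:
    """Stage 2: infer the answer, appending the rationale to the original input."""
    augmented_input = f"{x.question}\n{x.context}\n{rationale}"
    # Placeholder for a second generation pass over the rationale-augmented input.
    return f"Answer derived from: {augmented_input}"


if __name__ == "__main__":
    example = MultimodalInput(
        question="Which property do these objects share?",
        context="Options: (A) hard (B) soft",
        image_features=[0.12, -0.07, 0.33],  # stand-in for real image features
    )
    rationale = generate_rationale(example)    # stage 1: rationale generation
    answer = infer_answer(example, rationale)  # stage 2: answer inference
    print(answer)
```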
Cite
Text
Zhang et al. "Multimodal Chain-of-Thought Reasoning in Language Models." Transactions on Machine Learning Research, 2024.
Markdown
[Zhang et al. "Multimodal Chain-of-Thought Reasoning in Language Models." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/zhang2024tmlr-multimodal/)
BibTeX
@article{zhang2024tmlr-multimodal,
title = {{Multimodal Chain-of-Thought Reasoning in Language Models}},
author = {Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Zhao, Hai and Karypis, George and Smola, Alex},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/zhang2024tmlr-multimodal/}
}