CoMT: A Novel Benchmark for Chain of Multi-Modal Thought on Large Vision-Language Models

Cite

Text

Cheng et al. "CoMT: A Novel Benchmark for Chain of Multi-Modal Thought on Large Vision-Language Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i22.34538

Markdown

[Cheng et al. "CoMT: A Novel Benchmark for Chain of Multi-Modal Thought on Large Vision-Language Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/cheng2025aaai-comt/) doi:10.1609/aaai.v39i22.34538

BibTeX

@inproceedings{cheng2025aaai-comt,
  title     = {{CoMT: A Novel Benchmark for Chain of Multi-Modal Thought on Large Vision-Language Models}},
  author    = {Cheng, Zihui and Chen, Qiguang and Zhang, Jin and Fei, Hao and Feng, Xiaocheng and Che, Wanxiang and Li, Min and Qin, Libo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {23678--23686},
  doi       = {10.1609/aaai.v39i22.34538},
  url       = {https://mlanthology.org/aaai/2025/cheng2025aaai-comt/}
}