ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models via Error Detection
Abstract
As the field of Multimodal Large Language Models (MLLMs) continues to evolve, their potential for mathematical reasoning is promising: unlike text-only LLMs, they can handle multimodal questions through cross-modal understanding. Current mathematical benchmarks predominantly focus on evaluating MLLMs' problem-solving ability, leaving a crucial gap in more complex scenarios such as error detection, which is essential for enhancing reasoning capability in complicated settings. To fill this gap, we formally formulate a new task, **multimodal error detection**, and introduce **ErrorRadar**, the **first benchmark designed to assess MLLMs' capabilities on this task**. ErrorRadar evaluates two sub-tasks, error step identification and error categorization, providing a framework for assessing MLLMs' complex mathematical reasoning ability. It consists of 2,500 high-quality multimodal K-12 mathematical problems collected from real-world student interactions at an educational organization, with expert-based annotation and metadata such as problem type and error category. Through extensive experiments, we evaluate representative open-source and closed-source MLLMs, benchmarking their performance against educational expert evaluators. Results indicate that significant challenges remain: GPT-4o, the best-performing model, still trails human evaluators by around 10%.
Cite
Text
Yan et al. "ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models via Error Detection." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.
Markdown
[Yan et al. "ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models via Error Detection." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.](https://mlanthology.org/iclrw/2025/yan2025iclrw-errorradar/)
BibTeX
@inproceedings{yan2025iclrw-errorradar,
title = {{ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models via Error Detection}},
author = {Yan, Yibo and Wang, Shen and Huo, Jiahao and Li, Hang and Li, Boyan and Su, Jiamin and Gao, Xiong and Zhang, YiFan and Xu, Tianlong and Chu, Zhendong and Zhong, Aoxiao and Wang, Kun and Xiong, Hui and Yu, Philip S. and Hu, Xuming and Wen, Qingsong},
booktitle = {ICLR 2025 Workshops: LLM_Reason_and_Plan},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/yan2025iclrw-errorradar/}
}