MMCode: Benchmarking Multimodal Large Language Models in Code Generation with Visually Rich Programming Problems

Abstract

Programming often involves converting detailed and complex specifications into code, a process during which developers typically utilize visual aids to convey concepts more effectively. While recent developments in Large Multimodal Models have demonstrated remarkable abilities in visual reasoning and mathematical tasks, little work has investigated whether these models can effectively interpret visual elements for code generation. To this end, we present MMCode, the first multimodal coding dataset for evaluating algorithmic problem-solving skills in visually rich contexts. MMCode contains 3,548 questions and 6,620 images collected from real-world programming challenges on 10 code competition websites, presenting significant challenges due to their heavy demands on reasoning abilities. Our experiment results show that current state-of-the-art models struggle to solve these problems. The results highlight the lack of powerful vision-code models, and we hope MMCode can serve as an inspiration for future work in this domain. The data and code are publicly available.

Cite

Text

Li et al. "MMCode: Benchmarking Multimodal Large Language Models in Code Generation with Visually Rich Programming Problems." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.

Markdown

[Li et al. "MMCode: Benchmarking Multimodal Large Language Models in Code Generation with Visually Rich Programming Problems." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.](https://mlanthology.org/iclrw/2025/li2025iclrw-mmcode/)

BibTeX

@inproceedings{li2025iclrw-mmcode,
  title     = {{MMCode: Benchmarking Multimodal Large Language Models in Code Generation with Visually Rich Programming Problems}},
  author    = {Li, Kaixin and Tian, Yuchen and Hu, Qisheng and Luo, Ziyang and Huang, Zhiyong and Ma, Jing},
  booktitle = {ICLR 2025 Workshops: LLM_Reason_and_Plan},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/li2025iclrw-mmcode/}
}