MMIU: Multimodal Multi-Image Understanding for Evaluating Large Vision-Language Models
Abstract
The capability to process multiple images is crucial for Large Vision-Language Models (LVLMs) to develop a more thorough and nuanced understanding of a scene. Recent multi-image LVLMs have begun to address this need. However, their evaluation has not kept pace with their development. To fill this gap, we introduce the Multimodal Multi-image Understanding (MMIU) benchmark, a comprehensive evaluation suite designed to assess LVLMs across a wide range of multi-image tasks. MMIU encompasses 7 types of multi-image relationships, 52 tasks, 77K images, and 11K meticulously curated multiple-choice questions, making it the most extensive benchmark of its kind. Our evaluation of nearly 30 popular LVLMs, including both open-source and proprietary models, reveals significant challenges in multi-image comprehension, particularly in tasks involving spatial understanding. Even the most advanced models, such as GPT-4o, achieve only 55.7% accuracy on MMIU. Through multi-faceted analytical experiments, we identify key performance gaps and limitations, providing valuable insights for future model and data improvements. We aim for MMIU to advance the frontier of LVLM research and development. We release the data and code at https://github.com/MMIUBenchmark/MMIU.
Cite
Text
Meng et al. "MMIU: Multimodal Multi-Image Understanding for Evaluating Large Vision-Language Models." International Conference on Learning Representations, 2025.
Markdown
[Meng et al. "MMIU: Multimodal Multi-Image Understanding for Evaluating Large Vision-Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/meng2025iclr-mmiu/)
BibTeX
@inproceedings{meng2025iclr-mmiu,
  title     = {{MMIU: Multimodal Multi-Image Understanding for Evaluating Large Vision-Language Models}},
  author    = {Meng, Fanqing and Wang, Jin and Li, Chuanhao and Lu, Quanfeng and Tian, Hao and Yang, Tianshuo and Liao, Jiaqi and Zhu, Xizhou and Dai, Jifeng and Qiao, Yu and Luo, Ping and Zhang, Kaipeng and Shao, Wenqi},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/meng2025iclr-mmiu/}
}