MM-OPERA: Benchmarking Open-Ended Association Reasoning for Large Vision-Language Models
Abstract
Large Vision-Language Models (LVLMs) have exhibited remarkable progress. However, they still fall short of human intelligence, showing deficiencies such as hallucination and shallow pattern matching. In this work, we aim to evaluate a fundamental yet underexplored facet of intelligence: association, a cornerstone of human cognition for creative thinking and knowledge integration. Current benchmarks, often limited to closed-ended tasks, fail to capture the complexity of the open-ended association reasoning that is vital for real-world applications. To address this, we present MM-OPERA, a systematic benchmark with 11,497 instances across two open-ended tasks: Remote-Item Association (RIA) and In-Context Association (ICA), aligning the evaluation of association intelligence with human psychometric principles. The benchmark challenges LVLMs to engage in divergent thinking and convergent associative reasoning through free-form responses and explicit reasoning paths. We deploy tailored LLM-as-a-Judge strategies to evaluate these open-ended outputs, applying process-reward-informed judgment to dissect reasoning paths with precision. Extensive empirical studies on state-of-the-art LVLMs, including sensitivity analysis of task instances, validity analysis of the LLM-as-a-Judge strategies, and diversity analysis across abilities, domains, languages, cultures, and other dimensions, provide a comprehensive and nuanced understanding of the limitations of current LVLMs in associative reasoning, paving the way for more human-like and general-purpose AI. The dataset and code are available at https://github.com/MM-OPERA-Bench/MM-OPERA.
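To make the process-reward-informed judgment concrete, the sketch below illustrates one way an LLM judge could score a free-form association response step by step. It is only an illustration under assumed names (process_reward_judge, toy_judge are hypothetical and not part of the MM-OPERA codebase); the actual evaluation pipeline is in the linked repository.

from typing import Callable, List

def process_reward_judge(
    reasoning_steps: List[str],
    reference_context: str,
    judge: Callable[[str], float],
) -> float:
    """Score an open-ended association response step by step.

    Each reasoning step is rated independently by a judge callable
    (e.g., a wrapper around an LLM API) returning a value in [0, 1];
    the final score averages the step rewards, so one broken link in
    the association chain lowers the overall judgment.
    """
    if not reasoning_steps:
        return 0.0
    step_rewards = []
    for i, step in enumerate(reasoning_steps, start=1):
        prompt = (
            f"Context: {reference_context}\n"
            f"Step {i} of the model's association chain: {step}\n"
            "Rate how valid and relevant this association step is, "
            "as a number between 0 and 1."
        )
        step_rewards.append(judge(prompt))
    return sum(step_rewards) / len(step_rewards)

# Toy judge for demonstration only; a real setup would query an LLM here.
def toy_judge(prompt: str) -> float:
    return 1.0 if "strings" in prompt else 0.5

score = process_reward_judge(
    reasoning_steps=[
        "Both images show musical instruments.",
        "A violin and a guitar are linked by the concept of 'strings'.",
    ],
    reference_context="Image A: violin. Image B: guitar.",
    judge=toy_judge,
)
print(f"process-reward-informed score: {score:.2f}")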
Cite
Text
Huang et al. "MM-OPERA: Benchmarking Open-Ended Association Reasoning for Large Vision-Language Models." Advances in Neural Information Processing Systems, 2025.
Markdown
[Huang et al. "MM-OPERA: Benchmarking Open-Ended Association Reasoning for Large Vision-Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/huang2025neurips-mmopera/)
BibTeX
@inproceedings{huang2025neurips-mmopera,
title = {{MM-OPERA: Benchmarking Open-Ended Association Reasoning for Large Vision-Language Models}},
author = {Huang, Zimeng and Ke, Jinxin and Fan, Xiaoxuan and Yang, Yufeng and Liu, Yang and Liu, Zhonghan and Wang, Zedi and Dai, Junteng and Jiang, Haoyi and Zhou, Yuyu and Wang, Keze and Chen, Ziliang},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/huang2025neurips-mmopera/}
}