OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI

Abstract

The evolution of Artificial Intelligence (AI) has been significantly accelerated by advances in Large Language Models (LLMs) and Large Multimodal Models (LMMs), which increasingly exhibit cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges span seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that Olympic competition problems are ideal for evaluating AI's cognitive reasoning because of their complexity and interdisciplinary nature, both of which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives. We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes under process-level evaluation, which is vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o achieve only 39.97% overall accuracy (28.67% for mathematics and 29.71% for physics), illustrating current AI limitations in complex reasoning and multimodal integration. Through OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features.

Cite

Text

Huang et al. "OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI." Neural Information Processing Systems, 2024. doi:10.52202/079017-0607

Markdown

[Huang et al. "OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/huang2024neurips-olympicarena/) doi:10.52202/079017-0607

BibTeX

@inproceedings{huang2024neurips-olympicarena,
  title     = {{OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI}},
  author    = {Huang, Zhen and Wang, Zengzhi and Xia, Shijie and Li, Xuefeng and Zou, Haoyang and Xu, Ruijie and Fan, Run-Ze and Ye, Lyumanshan and Chern, Ethan and Ye, Yixin and Zhang, Yikai and Yang, Yuqing and Wu, Ting and Wang, Binjie and Sun, Shichao and Xiao, Yang and Li, Yiyuan and Zhou, Fan and Chern, Steffi and Qin, Yiwei and Ma, Yan and Su, Jiadi and Liu, Yixiu and Zheng, Yuxiang and Zhang, Shaoting and Lin, Dahua and Qiao, Yu and Liu, Pengfei},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0607},
  url       = {https://mlanthology.org/neurips/2024/huang2024neurips-olympicarena/}
}