Benchmarking Abstract and Reasoning Abilities Through a Theoretical Perspective

Abstract

In this paper, we aim to establish a simple, effective, and theoretically grounded benchmark for rigorously probing abstract reasoning in Large Language Models (LLMs). To this end, we first develop a mathematical framework that defines abstract reasoning as the ability to (i) extract essential patterns independent of surface representations, and (ii) apply consistent rules to these abstract patterns. Based on this framework, we introduce two complementary metrics: $\Gamma$ measures basic reasoning accuracy, while $\Delta$ quantifies a model's reliance on specific symbols rather than underlying patterns, a key indicator of true abstraction versus mere memorization. To implement this measurement, we design a benchmark built on systematic symbol remapping in rule-based tasks, which forces models to demonstrate genuine pattern recognition beyond superficial token matching. Extensive LLM evaluations on this benchmark (commercial API models, open models from 7B to 70B parameters, and multi-agent settings) reveal: 1) critical limitations in non-decimal arithmetic and symbolic reasoning; 2) persistent abstraction gaps despite chain-of-thought prompting; and 3) the effectiveness of $\Delta$ in robustly measuring memory dependence by quantifying performance degradation under symbol remapping, particularly highlighting operand-specific memorization. These findings underscore that current LLMs, despite domain-specific strengths, still lack robust abstract reasoning, highlighting key areas for future improvement.
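To make the metric pairing concrete, here is a minimal illustrative sketch (not the paper's actual implementation; all names are hypothetical) of how $\Gamma$ and $\Delta$ can be computed: $\Gamma$ as plain accuracy on the original tasks, and $\Delta$ as the accuracy drop after a bijective symbol remapping is applied to prompts and answers.

```python
# Hypothetical sketch of the Gamma/Delta idea: accuracy before vs. after
# systematic symbol remapping. A purely memorizing model scores well on the
# original symbols but collapses once the symbols are remapped.

def remap(text, mapping):
    """Apply a bijective symbol mapping to every character of a task string."""
    return "".join(mapping.get(ch, ch) for ch in text)

def accuracy(model, tasks):
    """Fraction of (prompt, answer) tasks the model answers correctly."""
    return sum(1 for prompt, answer in tasks if model(prompt) == answer) / len(tasks)

def gamma_delta(model, tasks, mapping):
    """Gamma: accuracy on the original tasks.
    Delta: Gamma minus accuracy on the symbol-remapped tasks; a large Delta
    signals reliance on surface symbols rather than the underlying rule."""
    remapped = [(remap(p, mapping), remap(a, mapping)) for p, a in tasks]
    gamma = accuracy(model, tasks)
    delta = gamma - accuracy(model, remapped)
    return gamma, delta

# Toy demonstration: a "model" that has memorized the original question-answer
# pairs achieves Gamma = 1.0 but suffers the maximal drop Delta = 1.0.
tasks = [("1+1", "2"), ("2+2", "4")]
mapping = {"1": "A", "2": "B", "4": "D"}
memorized = {"1+1": "2", "2+2": "4"}
model = lambda prompt: memorized.get(prompt, "")
gamma, delta = gamma_delta(model, tasks, mapping)
```

A model that truly abstracts the rule would answer correctly under any consistent relabeling of the symbols, keeping $\Delta$ near zero.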

Cite

Text

Ma et al. "Benchmarking Abstract and Reasoning Abilities Through a Theoretical Perspective." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ma et al. "Benchmarking Abstract and Reasoning Abilities Through a Theoretical Perspective." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ma2025icml-benchmarking/)

BibTeX

@inproceedings{ma2025icml-benchmarking,
  title     = {{Benchmarking Abstract and Reasoning Abilities Through a Theoretical Perspective}},
  author    = {Ma, Qingchuan and Wu, Yuhang and Zheng, Xiawu and Ji, Rongrong},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {42209--42235},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ma2025icml-benchmarking/}
}