ARB: Advanced Reasoning Benchmark for Large Language Models

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs achieve increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems that require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on the more demanding tasks. To improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach that allows GPT-4 to score its own intermediate reasoning steps. We find promising agreement between annotators and GPT-4 rubric evaluation scores.
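
To illustrate the kind of rubric-based scoring the abstract describes, the sketch below aggregates per-step, per-criterion grades into a single weighted score. It is a minimal illustration only: the rubric criteria, weights, and the grade_step stub (standing in for a model-based judge such as GPT-4 prompted with the rubric) are assumptions for demonstration, not the paper's actual evaluation pipeline.

# Minimal sketch of rubric-based scoring of intermediate reasoning steps.
# Criterion names, weights, and grade_step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    name: str      # e.g. "correct setup", "valid algebraic step"
    weight: float  # relative importance of this criterion

def grade_step(step: str, criterion: RubricCriterion) -> float:
    """Placeholder for a model-based judge (e.g. GPT-4 given the rubric).
    Returns a dummy score in [0, 1] purely for illustration."""
    return 1.0 if criterion.name.lower() in step.lower() else 0.5

def rubric_score(steps: list[str], rubric: list[RubricCriterion]) -> float:
    """Weighted average of per-step, per-criterion grades."""
    total_weight = sum(c.weight for c in rubric) * len(steps)
    total = sum(grade_step(s, c) * c.weight for s in steps for c in rubric)
    return total / total_weight if total_weight else 0.0

if __name__ == "__main__":
    rubric = [RubricCriterion("correct setup", 2.0),
              RubricCriterion("valid algebra", 1.0)]
    steps = ["Correct setup: apply energy conservation",
             "Valid algebra: solve for v"]
    print(f"rubric score: {rubric_score(steps, rubric):.2f}")

In practice, grade_step would query the judge model once per step and criterion; the point of the sketch is only how per-criterion judgments roll up into a single rubric score that can be compared against human annotators.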

Cite

Text

Sawada et al. "ARB: Advanced Reasoning Benchmark for Large Language Models." NeurIPS 2023 Workshops: MATH-AI, 2023.

Markdown

[Sawada et al. "ARB: Advanced Reasoning Benchmark for Large Language Models." NeurIPS 2023 Workshops: MATH-AI, 2023.](https://mlanthology.org/neuripsw/2023/sawada2023neuripsw-arb/)

BibTeX

@inproceedings{sawada2023neuripsw-arb,
  title     = {{ARB: Advanced Reasoning Benchmark for Large Language Models}},
  author    = {Sawada, Tomohiro and Paleka, Daniel and Havrilla, Alexander and Tadepalli, Pranav and Vidas, Paula and Kranias, Alexander and Nay, John and Gupta, Kshitij and Komatsuzaki, Aran},
  booktitle = {NeurIPS 2023 Workshops: MATH-AI},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/sawada2023neuripsw-arb/}
}