InductionBench: LLMs Fail in the Simplest Complexity Class
Abstract
Large language models (LLMs) have shown remarkable improvements in reasoning, largely due to intensive pretraining and scaling at inference time. Models such as o1 and o3 have fully or partially solved many existing benchmarks. However, most of these benchmarks emphasize deductive reasoning, including mathematical and coding tasks in which rules such as mathematical axioms or programming syntax are clearly defined, so that LLMs can plan and apply them to arrive at a solution. In contrast, *inductive reasoning*, in which one infers the underlying rules from observed data, remains less explored. Such inductive processes lie at the heart of scientific discovery, as they enable researchers to extract general principles from empirical observations. To assess whether LLMs possess this capacity, we introduce **InductionBench**, a new benchmark designed to evaluate the inductive reasoning ability of LLMs. Our experimental findings reveal that even o3, the most advanced model available, struggles to master the simplest complexity classes within the subregular hierarchy, highlighting a notable deficiency in current LLMs' inductive reasoning capabilities.
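To make the inductive setting concrete, here is a minimal, hypothetical sketch (an illustration under assumed conventions, not the paper's actual benchmark format): given observed input/output string pairs, a learner searches a small hypothesis space of context-dependent substitutions, in the spirit of the simple string-to-string functions found low in the subregular hierarchy, and keeps every rule consistent with the data.

```python
# Hypothetical illustration (not the paper's actual task format): inductive
# inference of a string-to-string rule from observed input/output pairs.
# Assumed hypothesis space: single substitutions "target -> repl / after context",
# loosely in the spirit of input strictly local functions.

from itertools import product

# Observed data; the hidden rule here is "a -> c whenever it follows b".
pairs = [
    ("ba", "bc"),
    ("aba", "abc"),
    ("baa", "bca"),
    ("aaa", "aaa"),
]

ALPHABET = "abc"

def apply_rule(s, target, repl, context):
    """Rewrite `target` as `repl` whenever the previous symbol is `context`."""
    out = []
    for i, ch in enumerate(s):
        if ch == target and i > 0 and s[i - 1] == context:
            out.append(repl)
        else:
            out.append(ch)
    return "".join(out)

# Brute-force search over the (tiny) hypothesis space: the "induction" step.
consistent = [
    (t, r, c)
    for t, r, c in product(ALPHABET, repeat=3)
    if t != r and all(apply_rule(x, t, r, c) == y for x, y in pairs)
]
print(consistent)  # -> [('a', 'c', 'b')], i.e. "a becomes c after b"
```

Here the exhaustive search recovers the single rule consistent with the data; the abstract's claim is that even such low-complexity function classes are difficult for current LLMs to induce from examples.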
Cite
Text
Hua et al. "InductionBench: LLMs Fail in the Simplest Complexity Class." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.

Markdown
[Hua et al. "InductionBench: LLMs Fail in the Simplest Complexity Class." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.](https://mlanthology.org/iclrw/2025/hua2025iclrw-inductionbench/)

BibTeX
@inproceedings{hua2025iclrw-inductionbench,
  title     = {{InductionBench: LLMs Fail in the Simplest Complexity Class}},
  author    = {Hua, Wenyue and Sun, Fei and Pan, Liangming and Jardine, Adam and Wang, William Yang},
  booktitle = {ICLR 2025 Workshops: LLM_Reason_and_Plan},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/hua2025iclrw-inductionbench/}
}