A Careful Examination of Large Language Model Performance on Grade School Arithmetic

Abstract

Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 8%, with several families of models showing evidence of systematic overfitting across almost all model sizes. Further analysis suggests a positive relationship (Spearman's r^2 = 0.36) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that some models may have partially memorized GSM8k. Nevertheless, many models, especially those on the frontier, show minimal signs of overfitting, and all models broadly demonstrate generalization to novel math problems guaranteed not to be in their training data.
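
The following Python sketch illustrates the kind of correlation analysis the abstract describes: relating each model's likelihood of generating GSM8k examples to its GSM8k-GSM1k accuracy gap via Spearman's rank correlation. The model names and numbers are hypothetical placeholders, not measurements from the paper.

# Illustrative sketch of the overfitting correlation described in the abstract.
# All values below are made-up placeholders, not results from the paper.
from scipy.stats import spearmanr

# Hypothetical per-model statistics:
#   ll_gsm8k: mean log-likelihood the model assigns to GSM8k problems
#   gap:      accuracy on GSM8k minus accuracy on GSM1k (positive suggests overfitting)
models = {
    "model_a": {"ll_gsm8k": -1.92, "gap": 0.07},
    "model_b": {"ll_gsm8k": -2.45, "gap": 0.01},
    "model_c": {"ll_gsm8k": -1.70, "gap": 0.05},
    "model_d": {"ll_gsm8k": -2.80, "gap": -0.01},
}

lls = [m["ll_gsm8k"] for m in models.values()]
gaps = [m["gap"] for m in models.values()]

# Spearman's rank correlation; squaring rho gives an r^2 in the sense quoted above.
rho, p_value = spearmanr(lls, gaps)
print(f"Spearman rho = {rho:.2f}, r^2 = {rho**2:.2f}, p = {p_value:.3f}")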

Cite

Text

Zhang et al. "A Careful Examination of Large Language Model Performance on Grade School Arithmetic." Neural Information Processing Systems, 2024. doi:10.52202/079017-1485

Markdown

[Zhang et al. "A Careful Examination of Large Language Model Performance on Grade School Arithmetic." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhang2024neurips-careful/) doi:10.52202/079017-1485

BibTeX

@inproceedings{zhang2024neurips-careful,
  title     = {{A Careful Examination of Large Language Model Performance on Grade School Arithmetic}},
  author    = {Zhang, Hugh and Da, Jeff and Lee, Dean and Robinson, Vaughn and Wu, Catherine and Song, Will and Zhao, Tiffany and Raja, Pranav and Zhuang, Charlotte and Slack, Dylan and Lyu, Qin and Hendryx, Sean and Kaplan, Russell and Lunati, Michele and Yue, Summer},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1485},
  url       = {https://mlanthology.org/neurips/2024/zhang2024neurips-careful/}
}