Library Learning Doesn’t: The Curious Case of the Single-Use “Library”

Abstract

Advances in Large Language Models (LLMs) have spurred a wave of LLM library learning systems for mathematical reasoning. These systems aim to learn a reusable library of *tools*, such as formal Isabelle lemmas or Python programs, that are tailored to a family of tasks. Many of these systems are inspired by the human structuring of knowledge into reusable and extendable concepts, but do current methods actually learn reusable libraries of tools? We study two library learning systems for mathematics, LEGO-Prover and TroVE, both of which reported increased accuracy. We find that function reuse is extremely infrequent on miniF2F and MATH. Our follow-up ablation experiments suggest that, rather than reuse, self-correction and self-consistency are the primary drivers of the observed performance gains. Our code and data are available at https://github.com/ikb-a/curious-case.

Cite

Text

Berlot-Attwell et al. "Library Learning Doesn’t: The Curious Case of the Single-Use “Library”." NeurIPS 2024 Workshops: MATH-AI, 2024.

Markdown

[Berlot-Attwell et al. "Library Learning Doesn’t: The Curious Case of the Single-Use “Library”." NeurIPS 2024 Workshops: MATH-AI, 2024.](https://mlanthology.org/neuripsw/2024/berlotattwell2024neuripsw-library/)

BibTeX

@inproceedings{berlotattwell2024neuripsw-library,
  title     = {{Library Learning Doesn't: The Curious Case of the Single-Use ``Library''}},
  author    = {Berlot-Attwell, Ian and Rudzicz, Frank and Si, Xujie},
  booktitle = {NeurIPS 2024 Workshops: MATH-AI},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/berlotattwell2024neuripsw-library/}
}