Mining Math Conjectures from LLMs: A Pruning Approach

Abstract

We present a novel approach to generating mathematical conjectures using Large Language Models (LLMs). Focusing on the solubilizer, a relatively recent construct in group theory, we demonstrate how LLMs such as ChatGPT, Gemini, and Claude can be leveraged to generate conjectures. These conjectures are then pruned by prompting the LLMs themselves to search for counterexamples. Our results indicate that LLMs can produce original conjectures that, while not groundbreaking, are either plausible or falsifiable via counterexamples, though the models exhibit limitations in code execution.
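The generate-then-prune pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the LLM calls are stubbed out, conjectures are represented as (statement, predicate) pairs, and "counterexample generation" is a brute-force search over small toy instances standing in for LLM-produced counterexamples.

```python
def prune(conjectures, candidates):
    """Keep only conjectures for which no candidate is a counterexample.

    conjectures: list of (statement, predicate) pairs, where predicate(c)
                 is True if the conjecture holds on instance c.
    candidates:  iterable of instances to test against (here, a stand-in
                 for LLM-generated counterexample attempts).
    """
    surviving = []
    for statement, holds in conjectures:
        counterexample = next((c for c in candidates if not holds(c)), None)
        if counterexample is None:
            surviving.append(statement)
        else:
            print(f"pruned {statement!r}: fails at {counterexample}")
    return surviving


# Toy stand-ins for LLM-generated conjectures, phrased over integers
# rather than solubilizers so the sketch stays self-contained.
conjectures = [
    ("every positive integer is even", lambda n: n % 2 == 0),       # false
    ("n*(n+1) is always even",         lambda n: n * (n + 1) % 2 == 0),  # true
]

survivors = prune(conjectures, candidates=range(1, 50))
print(survivors)
```

In the paper's setting, both the conjecture statements and the counterexample attempts come from the LLMs themselves; the loop above only captures the pruning logic.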

Cite

Text

Chuharski et al. "Mining Math Conjectures from LLMs: A Pruning Approach." NeurIPS 2024 Workshops: MATH-AI, 2024.

Markdown

[Chuharski et al. "Mining Math Conjectures from LLMs: A Pruning Approach." NeurIPS 2024 Workshops: MATH-AI, 2024.](https://mlanthology.org/neuripsw/2024/chuharski2024neuripsw-mining/)

BibTeX

@inproceedings{chuharski2024neuripsw-mining,
  title     = {{Mining Math Conjectures from LLMs: A Pruning Approach}},
  author    = {Chuharski, Jake and Collins, Elias Rojas and Meringolo, Mark},
  booktitle = {NeurIPS 2024 Workshops: MATH-AI},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chuharski2024neuripsw-mining/}
}