Learning Interestingness in Automated Mathematical Theory Formation

Abstract

We take two key steps in automating the open-ended discovery of new mathematical theories, a grand challenge in artificial intelligence. First, we introduce Fermat, a reinforcement learning (RL) environment that models concept discovery and theorem-proving using a set of symbolic actions, opening up a range of RL problems relevant to theory discovery. Second, we explore a specific problem through Fermat: automatically scoring the interestingness of mathematical objects. We investigate evolutionary algorithms for synthesizing nontrivial interestingness measures. In particular, we introduce an LLM-based evolutionary algorithm featuring function abstraction, which yields notable improvements over hard-coded baselines in discovering theories of elementary number theory and finite fields. We open-source the Fermat environment at github.com/trishullab/Fermat.

Cite

Text

Tsoukalas et al. "Learning Interestingness in Automated Mathematical Theory Formation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Tsoukalas et al. "Learning Interestingness in Automated Mathematical Theory Formation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/tsoukalas2025neurips-learning/)

BibTeX

@inproceedings{tsoukalas2025neurips-learning,
  title     = {{Learning Interestingness in Automated Mathematical Theory Formation}},
  author    = {Tsoukalas, George and Saha, Rahul and Thakur, Amitayush and Reguyal, Sabrina and Chaudhuri, Swarat},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/tsoukalas2025neurips-learning/}
}