Mathematical Reasoning via Self-Supervised Skip-Tree Training
Abstract
We demonstrate that self-supervised language modeling applied to mathematical formulas enables logical reasoning. To measure the logical reasoning abilities of language models, we formulate several evaluation (downstream) tasks, such as inferring types, suggesting missing assumptions and completing equalities. For training language models for formal mathematics, we propose a novel skip-tree task. We find that models trained on the skip-tree task show surprisingly strong mathematical reasoning abilities, and outperform models trained on standard skip-sequence tasks. We also analyze the models' ability to formulate new conjectures by measuring how often the predictions are provable and useful in other proofs.
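To make the skip-tree idea concrete, below is a minimal sketch of one plausible reading of the objective: a formula is parsed into its syntax tree, a random subtree is replaced by a mask token, and the model is trained to predict the masked subtree from the surrounding context. The S-expression parser, the `<PREDICT>` mask token, and the example formula are illustrative assumptions, not the paper's exact pipeline.

```python
import random

def tokenize(sexpr):
    """Split an S-expression string into '(', ')', and atom tokens."""
    return sexpr.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Parse a token list into a nested-list syntax tree."""
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # discard closing ')'
        return node
    return token

def unparse(tree):
    """Render a nested-list tree back into an S-expression string."""
    if isinstance(tree, list):
        return "(" + " ".join(unparse(child) for child in tree) + ")"
    return tree

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs for every internal node of the tree."""
    if isinstance(tree, list):
        yield path, tree
        for i, child in enumerate(tree):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, value):
    """Return a copy of `tree` with the subtree at `path` replaced by `value`."""
    if not path:
        return value
    head, *rest = path
    return [replace_at(c, tuple(rest), value) if i == head else c
            for i, c in enumerate(tree)]

def skip_tree_example(sexpr, rng=random):
    """Build one (input, target) pair by masking a randomly chosen subtree."""
    tree = parse(tokenize(sexpr))
    path, subtree = rng.choice(list(subtrees(tree)))
    masked = replace_at(tree, path, "<PREDICT>")
    return unparse(masked), unparse(subtree)

# Illustrative formula: x + 0 = x, written as an S-expression.
inp, tgt = skip_tree_example("(= (+ x 0) x)")
print("input :", inp)   # e.g. (= <PREDICT> x)
print("target:", tgt)   # e.g. (+ x 0)
```

Masking whole subtrees rather than contiguous token spans is what distinguishes this from skip-sequence training: the masked region is always a syntactically complete expression, so the prediction target is a well-formed term rather than an arbitrary token window.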
Cite
Text
Rabe et al. "Mathematical Reasoning via Self-Supervised Skip-Tree Training." International Conference on Learning Representations, 2021.
Markdown
[Rabe et al. "Mathematical Reasoning via Self-Supervised Skip-Tree Training." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/rabe2021iclr-mathematical/)
BibTeX
@inproceedings{rabe2021iclr-mathematical,
  title     = {{Mathematical Reasoning via Self-Supervised Skip-Tree Training}},
  author    = {Rabe, Markus Norman and Lee, Dennis and Bansal, Kshitij and Szegedy, Christian},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/rabe2021iclr-mathematical/}
}