Autoformalization with Large Language Models
Abstract
Autoformalization is the process of automatically translating natural language mathematics into formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence. While the long-term goal of autoformalization has seemed elusive, we show that large language models (LLMs) provide new prospects towards this goal. We make the surprising observation that LLMs can translate a significant portion ($25.3\%$) of mathematical competition problems perfectly into formal specifications in Isabelle/HOL. We demonstrate the usefulness of this process by improving a previously introduced neural theorem prover via training on these autoformalized theorems. Our methodology results in a new state-of-the-art result on the MiniF2F theorem proving benchmark, improving the proof rate from $29.6\%$ to $35.2\%$.
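A minimal sketch of the translation task the abstract describes, assuming a hypothetical informal statement and theory name (neither is drawn from the paper's dataset): the input is an informal competition-style statement, and the target output is a formal statement in Isabelle/HOL.

theory Autoformalization_Example
  imports Main
begin

(* Hypothetical informal input: "Show that the sum of two odd integers is even." *)
(* Candidate formal statement an LLM might produce as its autoformalization: *)
theorem sum_of_two_odds_is_even:
  fixes a b :: int
  assumes "odd a" and "odd b"
  shows "even (a + b)"
  using assms by simp  (* this toy statement happens to be easy to discharge;
                          in the paper's pipeline, proof search is left to a
                          neural theorem prover *)

end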
Cite
Text
Wu et al. "Autoformalization with Large Language Models." Neural Information Processing Systems, 2022.
Markdown
[Wu et al. "Autoformalization with Large Language Models." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wu2022neurips-autoformalization/)
BibTeX
@inproceedings{wu2022neurips-autoformalization,
  title = {{Autoformalization with Large Language Models}},
  author = {Wu, Yuhuai and Jiang, Albert Qiaochu and Li, Wenda and Rabe, Markus and Staats, Charles and Jamnik, Mateja and Szegedy, Christian},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/wu2022neurips-autoformalization/}
}