Premise Selection for Theorem Proving by Deep Graph Embedding
Abstract
We propose a deep learning approach to premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming yet fully preserves syntactic and semantic information. We then embed the graph into a vector with a novel embedding method that preserves edge-ordering information. Our approach achieves state-of-the-art results on the HolStep dataset, improving classification accuracy from 83% to 90.3%.
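The abstract compresses two ideas into one paragraph: a graph construction that is invariant to variable renaming, and an embedding that respects edge ordering. Below is a minimal Python sketch of the first idea, not the authors' code: the `Term` type, the generic `VAR` token, and the toy formula syntax are assumptions for illustration. All occurrences of a variable are merged into a single node whose name is discarded, so alpha-equivalent formulas yield identical graphs, while each edge records its child position so a downstream embedding can use edge ordering.

```python
from dataclasses import dataclass, field

@dataclass
class Term:
    """A node in a toy higher-order-logic syntax tree (illustrative)."""
    label: str                          # e.g. "forall", "+", "x"
    children: list = field(default_factory=list)
    is_var: bool = False

def term_to_graph(term):
    """Return (node_labels, edges) for a term tree.

    Variables keep their identity (all occurrences share one node)
    but lose their name, so the graph is invariant to renaming.
    Each edge is (parent, child, position); the position preserves
    edge ordering for an order-aware embedding.
    """
    labels, edges, var_nodes = [], [], {}

    def visit(t):
        if t.is_var:
            # Merge every occurrence of the same variable into one
            # node labeled with a generic VAR token.
            if t.label not in var_nodes:
                var_nodes[t.label] = len(labels)
                labels.append("VAR")
            return var_nodes[t.label]
        node = len(labels)
        labels.append(t.label)
        for i, child in enumerate(t.children):
            edges.append((node, visit(child), i))
        return node

    visit(term)
    return labels, edges

# Alpha-equivalent formulas map to the same graph:
# "forall x. x + x" and "forall y. y + y".
x = Term("x", is_var=True)
y = Term("y", is_var=True)
f1 = Term("forall", [x, Term("+", [x, x])])
f2 = Term("forall", [y, Term("+", [y, y])])
assert term_to_graph(f1) == term_to_graph(f2)
```

The paper's embedding then propagates node features over these ordered edges into a fixed-size vector; this sketch stops at the graph construction step.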
Cite
Wang et al. "Premise Selection for Theorem Proving by Deep Graph Embedding." Neural Information Processing Systems, 2017.
@inproceedings{wang2017neurips-premise,
  title     = {{Premise Selection for Theorem Proving by Deep Graph Embedding}},
  author    = {Wang, Mingzhe and Tang, Yihe and Wang, Jian and Deng, Jia},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {2786--2796},
  url       = {https://mlanthology.org/neurips/2017/wang2017neurips-premise/}
}