Towards Neural Theorem Proving at Scale
Abstract
Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real-world datasets. We focus on the Neural Theorem Prover model proposed by Rocktäschel and Riedel (2017), a continuous relaxation of the Prolog backward chaining algorithm where unification between terms is replaced by the similarity between their embedding representations. For answering a given query, this model needs to consider all possible proof paths and then aggregate the results; this quickly becomes infeasible even for small Knowledge Bases. We observe that we can accurately approximate the inference process in this model by considering only proof paths associated with the highest proof scores. This enables inference and learning on previously impracticable Knowledge Bases.
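The following is a minimal sketch of the idea described above: soft unification as a similarity between symbol embeddings, a proof path scored by the minimum unification score along it, and the exact max-over-all-paths aggregation replaced by scoring only the k candidates nearest in embedding space. The symbol names, the toy knowledge base, and the RBF-style kernel are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: soft unification via embedding similarity, and approximating
# the aggregation over all proof paths by scoring only the top-k nearest
# candidates. All symbols, the kernel, and the toy KB are assumptions.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, MU = 16, 1.0

# Toy symbol embeddings (in the real model these are learned end-to-end).
symbols = ["grandpaOf", "fatherOf", "parentOf", "abe", "homer", "bart"]
emb = {s: rng.normal(size=EMB_DIM) for s in symbols}

def unify(a: str, b: str) -> float:
    """Soft unification: similarity of two symbol embeddings under an
    RBF-style kernel (one common choice; the exact kernel is an assumption)."""
    d = np.linalg.norm(emb[a] - emb[b])
    return float(np.exp(-d ** 2 / (2.0 * MU ** 2)))

def proof_score(path):
    """A proof path is a sequence of (query symbol, KB symbol) unifications;
    its score is the minimum unification score along the path."""
    return min(unify(q, k) for q, k in path)

def exact_score(paths):
    """Exact inference: aggregate (max) over *all* candidate proof paths."""
    return max(proof_score(p) for p in paths)

def topk_score(query: str, kb_symbols, k: int = 2):
    """Approximate inference: keep only the k KB symbols whose embeddings are
    nearest to the query symbol, and score just those proof paths."""
    nearest = sorted(kb_symbols, key=lambda s: np.linalg.norm(emb[query] - emb[s]))
    paths = [[(query, s)] for s in nearest[:k]]
    return exact_score(paths)

if __name__ == "__main__":
    kb = ["fatherOf", "parentOf"]
    all_paths = [[("grandpaOf", s)] for s in kb]
    print("exact score:", exact_score(all_paths))
    print("top-k score:", topk_score("grandpaOf", kb, k=1))
```

Because the final score is a max over paths, restricting attention to the highest-scoring candidates leaves the result nearly unchanged while avoiding the combinatorial blow-up over all proof paths.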
Cite
Text
Minervini et al. "Towards Neural Theorem Proving at Scale." ICML 2018 Workshops: NAMPI, 2018.
Markdown
[Minervini et al. "Towards Neural Theorem Proving at Scale." ICML 2018 Workshops: NAMPI, 2018.](https://mlanthology.org/icmlw/2018/minervini2018icmlw-neural/)
BibTeX
@inproceedings{minervini2018icmlw-neural,
title = {{Towards Neural Theorem Proving at Scale}},
author = {Minervini, Pasquale and Bošnjak, Matko and Rocktäschel, Tim and Riedel, Sebastian},
booktitle = {ICML 2018 Workshops: NAMPI},
year = {2018},
url = {https://mlanthology.org/icmlw/2018/minervini2018icmlw-neural/}
}