Scalable Neural Methods for Reasoning with a Symbolic Knowledge Base

Abstract

We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.
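The core idea described in the abstract can be illustrated with a small sketch. Here, following the paper's general approach, a KB of (subject, relation, object) triples is encoded as three sparse matrices that map each triple to its subject, relation, and object; a single differentiable "relation-following" hop is then a few sparse matrix products. The matrix names (`M_subj`, `M_rel`, `M_obj`) and the toy KB are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy KB: 4 entities (0..3), 2 relations (0..1), triples (subj, rel, obj).
# Illustrative data, not from the paper.
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3)]
n_e, n_r, n_t = 4, 2, len(triples)

ones = [1.0] * n_t
rows = list(range(n_t))
# Sketch of a "reified" KB: one row per triple in each matrix.
M_subj = csr_matrix((ones, (rows, [s for s, _, _ in triples])), shape=(n_t, n_e))
M_rel  = csr_matrix((ones, (rows, [r for _, r, _ in triples])), shape=(n_t, n_r))
M_obj  = csr_matrix((ones, (rows, [o for _, _, o in triples])), shape=(n_t, n_e))

def follow(x, r):
    """One differentiable relation-following hop: weight each triple by how
    well its subject matches the entity distribution x and its relation
    matches the relation distribution r, then sum the matching objects."""
    triple_scores = (M_subj @ x) * (M_rel @ r)  # elementwise over triples
    return M_obj.T @ triple_scores

x = np.array([1.0, 0.0, 0.0, 0.0])  # start: all mass on entity 0
r = np.array([1.0, 0.0])            # query: relation 0
y = follow(x, r)    # one hop: mass moves to entity 1
y2 = follow(y, r)   # second hop: mass moves to entity 2
```

Because every operation is a (sparse) matrix product, the same hop composes into the multi-hop inferences the abstract mentions and can be batched and sharded across GPUs.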

Cite

Text

Cohen et al. "Scalable Neural Methods for Reasoning with a Symbolic Knowledge Base." International Conference on Learning Representations, 2020.

Markdown

[Cohen et al. "Scalable Neural Methods for Reasoning with a Symbolic Knowledge Base." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/cohen2020iclr-scalable/)

BibTeX

@inproceedings{cohen2020iclr-scalable,
  title     = {{Scalable Neural Methods for Reasoning with a Symbolic Knowledge Base}},
  author    = {Cohen, William W. and Sun, Haitian and Hofer, R. Alex and Siegler, Matthew},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/cohen2020iclr-scalable/}
}