Learning and Inference in Tractable Probabilistic Knowledge Bases

Abstract

Building efficient large-scale knowledge bases (KBs) is a longstanding goal of AI. KBs need to be first-order to be sufficiently expressive, and probabilistic to handle uncertainty, but these requirements generally lead to intractable inference. Recently, tractable Markov logic (TML) was proposed as the first non-trivial tractable first-order probabilistic representation. This paper describes the first inference and learning algorithms for TML, and its first application to real-world problems. Inference time per query is sublinear in the size of the KB, and very large KBs are supported via a disk-based implementation using a relational database engine, as well as parallelization. Query answering is fast enough for interactive and real-time use. We show that, despite the data being non-i.i.d. in general, maximum likelihood parameters for TML knowledge bases can be computed in closed form. We use our algorithms to build a very large tractable probabilistic KB from numerous heterogeneous data sets. The KB includes millions of objects and billions of parameters. Our experiments show that the learned KB is competitive with existing approaches on challenging tasks in information extraction and integration.

Cite

Text

Niepert and Domingos. "Learning and Inference in Tractable Probabilistic Knowledge Bases." Conference on Uncertainty in Artificial Intelligence, 2015.

Markdown

[Niepert and Domingos. "Learning and Inference in Tractable Probabilistic Knowledge Bases." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/niepert2015uai-learning/)

BibTeX

@inproceedings{niepert2015uai-learning,
  title     = {{Learning and Inference in Tractable Probabilistic Knowledge Bases}},
  author    = {Niepert, Mathias and Domingos, Pedro M.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2015},
  pages     = {632--641},
  url       = {https://mlanthology.org/uai/2015/niepert2015uai-learning/}
}