Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption

Abstract

We study the following question: we are given a knowledge base in which some facts are missing, we learn the weights of a Markov logic network by maximum likelihood estimation on this knowledge base, and we then use the learned network to predict the missing facts. Assuming that facts are missing independently and with the same probability, i.e., missing completely at random (MCAR), can this approach be shown to be consistent in some precise sense? The question is non-trivial because we are learning from only one training example. In this paper we show that the answer is positive.
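To make the setting concrete, the following is a minimal, hypothetical Python sketch of the pipeline the abstract describes, not the paper's actual algorithm or experiments. It builds a toy MLN over a two-constant domain with a single weighted formula, deletes each true fact independently with probability p (MCAR, closed-world: missing facts read as false), fits the weight by exact maximum likelihood via enumeration of all 16 worlds, and then queries the learned model to predict an unobserved fact. All predicate names, the formula, and the numbers are illustrative assumptions.

```python
import itertools
import math
import random

# Hypothetical toy setting (names and numbers are illustrative, not from the paper):
# domain {a, b}; ground atoms smokes(a), smokes(b), friends(a,b), friends(b,a).
ATOMS = ["S_a", "S_b", "F_ab", "F_ba"]


def n_true(world):
    """Count true groundings of smokes(x) & smokes(y) & friends(x,y), x != y."""
    s = {"a": world["S_a"], "b": world["S_b"]}
    f = {("a", "b"): world["F_ab"], ("b", "a"): world["F_ba"]}
    return sum(1 for (x, y), fv in f.items() if fv and s[x] and s[y])


# All 2^4 = 16 possible worlds, small enough to enumerate exactly.
WORLDS = [dict(zip(ATOMS, bits)) for bits in itertools.product([False, True], repeat=4)]


def expected_n(w):
    """E_w[n] under the MLN distribution P(world) proportional to exp(w * n(world))."""
    weights = [math.exp(w * n_true(wd)) for wd in WORLDS]
    z = sum(weights)
    return sum(n_true(wd) * wt for wd, wt in zip(WORLDS, weights)) / z


# A "complete" KB, then MCAR deletion: each true fact survives independently
# with probability 1 - p; absent facts are treated as false (closed world).
full_kb = {"S_a": True, "S_b": True, "F_ab": True, "F_ba": False}
random.seed(0)  # with this seed, no fact happens to be deleted
p = 0.25
observed = {a: (v and random.random() >= p) for a, v in full_kb.items()}

# Maximum likelihood for the single weight: the exact gradient of the
# log-likelihood of the observed world is n(observed) - E_w[n].
n_obs = n_true(observed)
w = 0.0
for _ in range(2000):
    w += 0.5 * (n_obs - expected_n(w))


def prob_true(evidence, query):
    """P(query = true | evidence) under the learned model, by enumeration."""
    num = den = 0.0
    for wd in WORLDS:
        if all(wd[a] == v for a, v in evidence.items()):
            wt = math.exp(w * n_true(wd))
            den += wt
            if wd[query]:
                num += wt
    return num / den


# KB completion step: query a fact that is absent from the observed KB.
evidence = {a: observed[a] for a in ATOMS if a != "F_ba"}
pred = prob_true(evidence, "F_ba")
```

For this toy model the maximum-likelihood weight has a closed form (exp(2w) = 13, so w = ln(13)/2), and the completion query reduces to a sigmoid of w; the sketch recovers both by brute-force enumeration. The paper's contribution is the consistency analysis of this kind of procedure, not the enumeration itself, which only scales to tiny domains.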

Cite

Text

Kuželka and Davis. "Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption." Uncertainty in Artificial Intelligence, 2019.

Markdown

[Kuželka and Davis. "Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/kuzelka2019uai-markov/)

BibTeX

@inproceedings{kuzelka2019uai-markov,
  title     = {{Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption}},
  author    = {Kuželka, Ondřej and Davis, Jesse},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2019},
  pages     = {1138--1148},
  volume    = {115},
  url       = {https://mlanthology.org/uai/2019/kuzelka2019uai-markov/}
}