Scalable Training of Markov Logic Networks Using Approximate Counting

Abstract

In this paper, we propose principled weight learning algorithms for Markov logic networks that can easily scale to much larger datasets and application domains than existing algorithms. The main idea in our approach is to use approximate counting techniques to substantially reduce the complexity of the most computation-intensive sub-step in weight learning: computing the number of groundings of a first-order formula that evaluate to true given a truth assignment to all the random variables. We derive theoretical bounds on the performance of our new algorithms and demonstrate experimentally that they are orders of magnitude faster and achieve accuracy equal to or better than existing approaches.
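The counting sub-step the abstract refers to can be illustrated with a toy sketch (this is not the paper's algorithm; the domain, clause, and sampling scheme below are illustrative assumptions). For a clause such as Smokes(x) ∧ Friends(x, y) → Smokes(y), exact counting enumerates all groundings, which grows quadratically with the domain size, while a simple Monte Carlo estimate samples groundings uniformly and scales the hit rate:

```python
import itertools
import random

# Toy illustration (not the paper's algorithm): count the groundings of
# the clause  Smokes(x) & Friends(x,y) -> Smokes(y)  that evaluate to
# true under a fixed truth assignment, exactly and by sampling.

random.seed(0)
people = list(range(200))
smokes = {p: random.random() < 0.3 for p in people}
friends = {(a, b): random.random() < 0.05 for a in people for b in people}

def clause_true(x, y):
    # An implication is true unless its body holds and its head fails.
    return not (smokes[x] and friends[(x, y)] and not smokes[y])

# Exact count: enumerate all |people|^2 groundings -- the expensive step.
exact = sum(clause_true(x, y) for x, y in itertools.product(people, people))

# Approximate count: sample m groundings uniformly and scale up.
m = 5000
hits = sum(clause_true(random.choice(people), random.choice(people))
           for _ in range(m))
approx = hits / m * len(people) ** 2

# The estimate is unbiased; its relative error shrinks as O(1/sqrt(m)),
# independent of the domain size, which is the source of the speedup.
print(exact, round(approx))
```

The point of the sketch is only the cost contrast: the exact count touches every grounding, while the sampled estimate does a fixed amount of work regardless of how large the domain grows.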

Cite

Text

Sarkhel et al. "Scalable Training of Markov Logic Networks Using Approximate Counting." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10119

Markdown

[Sarkhel et al. "Scalable Training of Markov Logic Networks Using Approximate Counting." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/sarkhel2016aaai-scalable/) doi:10.1609/AAAI.V30I1.10119

BibTeX

@inproceedings{sarkhel2016aaai-scalable,
  title     = {{Scalable Training of Markov Logic Networks Using Approximate Counting}},
  author    = {Sarkhel, Somdeb and Venugopal, Deepak and Pham, Tuan Anh and Singla, Parag and Gogate, Vibhav},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {1067--1073},
  doi       = {10.1609/AAAI.V30I1.10119},
  url       = {https://mlanthology.org/aaai/2016/sarkhel2016aaai-scalable/}
}