Learning Markov Logic Networks Using Structural Motifs
Abstract
Markov logic networks (MLNs) use first-order formulas to define features of Markov networks. Current MLN structure learners can only learn short clauses (4-5 literals) due to extreme computational costs, and thus are unable to represent complex regularities in data. To address this problem, we present LSM, the first MLN structure learner capable of efficiently and accurately learning long clauses. LSM is based on the observation that relational data typically contains patterns that are variations of the same structural motifs. By constraining the search for clauses to occur within motifs, LSM can greatly speed up the search and thereby reduce the cost of finding long clauses. LSM uses random walks to identify densely connected objects in the data, and groups them and their associated relations into a motif. Our experiments on three real-world datasets show that our approach is 2-5 orders of magnitude faster than state-of-the-art approaches, while achieving the same or better predictive performance.
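The core idea of using random walks to find densely connected objects can be illustrated with a minimal sketch. This is not the authors' algorithm (LSM uses hitting times and symmetry-breaking over relational paths); it is only a toy illustration of the underlying intuition, assuming a simple undirected graph given as an adjacency dict and a visit-frequency threshold chosen for illustration:

```python
import random
from collections import Counter

def random_walk_cluster(graph, source, num_walks=200, walk_len=5,
                        threshold=0.2, seed=0):
    """Run short random walks from `source` and return the nodes visited
    in at least `threshold` fraction of walks -- a rough proxy for the
    densely connected neighborhood that motif discovery groups together.

    All parameter values here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(num_walks):
        node = source
        visited = {source}
        for _ in range(walk_len):
            neighbors = graph.get(node)
            if not neighbors:
                break
            node = rng.choice(neighbors)
            visited.add(node)
        hits.update(visited)  # count each node at most once per walk
    return {n for n, c in hits.items() if c / num_walks >= threshold}

# Toy graph: a dense triangle {a, b, c} with a pendant node d.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}
cluster = random_walk_cluster(graph, "a")
```

Nodes in the dense triangle are reached by most walks, so they cluster together; in LSM such a cluster and its relations would form a motif within which clause search is then confined.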
Cite
Text
Kok and Domingos. "Learning Markov Logic Networks Using Structural Motifs." International Conference on Machine Learning, 2010.
Markdown
[Kok and Domingos. "Learning Markov Logic Networks Using Structural Motifs." International Conference on Machine Learning, 2010.](https://mlanthology.org/icml/2010/kok2010icml-learning/)
BibTeX
@inproceedings{kok2010icml-learning,
title = {{Learning Markov Logic Networks Using Structural Motifs}},
author = {Kok, Stanley and Domingos, Pedro M.},
booktitle = {International Conference on Machine Learning},
year = {2010},
pages = {551--558},
url = {https://mlanthology.org/icml/2010/kok2010icml-learning/}
}