Learning Structurally Consistent Undirected Probabilistic Graphical Models
Abstract
In many real-world domains, undirected graphical models such as Markov random fields provide a more natural representation of the statistical dependency structure than directed graphical models. Unfortunately, structure learning of undirected graphs using likelihood-based scores remains difficult because of the intractability of computing the partition function. We describe a new Markov random field structure learning algorithm, motivated by the canonical parameterization of Abbeel et al. We improve the computational efficiency of their parameterization by learning per-variable canonical factors, which makes our algorithm suitable for domains with hundreds of nodes. We compare our algorithm against several algorithms for learning undirected and directed models on simulated and real datasets from biology. Our algorithm frequently outperforms existing algorithms, producing higher-quality structures, suggesting that enforcing consistency during structure learning is beneficial for learning undirected graphs.
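For context, the canonical parameterization the abstract refers to (from Abbeel et al.'s work on learning factor graphs) expresses each factor relative to a fixed default assignment via Möbius-style inclusion–exclusion; the notation below is a hedged sketch, not taken from this paper:

```latex
% Canonical factor over a variable subset D, relative to a
% fixed default assignment \bar{x} of all variables.
% For an assignment x_D to the variables in D:
f^{*}_{D}(x_D) \;=\; \sum_{U \subseteq D} (-1)^{|D \setminus U|}
    \log P\!\left(x_U,\; \bar{x}_{V \setminus U}\right)
% i.e., each term keeps the assignment x on the subset U and
% resets the remaining variables to their default values \bar{x}.
```

Learning one such factor per variable (rather than per clique), as the abstract describes, reduces the number of conditional distributions that must be estimated, which is what makes the approach tractable for networks with hundreds of nodes.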
Cite
Text

Roy et al. "Learning Structurally Consistent Undirected Probabilistic Graphical Models." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553490

Markdown

[Roy et al. "Learning Structurally Consistent Undirected Probabilistic Graphical Models." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/roy2009icml-learning/) doi:10.1145/1553374.1553490

BibTeX
@inproceedings{roy2009icml-learning,
title = {{Learning Structurally Consistent Undirected Probabilistic Graphical Models}},
author = {Roy, Sushmita and Lane, Terran and Werner-Washburne, Margaret},
booktitle = {International Conference on Machine Learning},
year = {2009},
pages = {905-912},
doi = {10.1145/1553374.1553490},
url = {https://mlanthology.org/icml/2009/roy2009icml-learning/}
}