Learning Bayesian Networks by Branching on Constraints
Abstract
We consider the Bayesian network structure learning problem, and present a new algorithm for enumerating the $k$ best Markov equivalence classes. This algorithm is score-based, but uses conditional independence constraints as a way to describe the search space of equivalence classes. The techniques we use here can potentially lead to the development of score-based methods that deal with more complex domains, such as the presence of latent confounders or feedback loops. We evaluate our algorithm’s performance on simulated continuous data.
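The abstract describes a score-based method for enumerating the $k$ best Markov equivalence classes. As a point of reference only, below is a minimal brute-force sketch of that task on simulated Gaussian data, assuming NumPy; it is not the paper's branching-on-constraints algorithm, and the function names (all_dags, bic_score, k_best_classes) and the toy three-variable chain are illustrative choices, not taken from the paper. The sketch relies on two standard facts: two DAGs are Markov equivalent iff they share the same skeleton and v-structures (Verma & Pearl), and the Gaussian BIC assigns the same score to every DAG in an equivalence class.

```python
# A minimal, brute-force sketch of the task the paper addresses -- NOT the paper's
# branching-on-constraints algorithm. It enumerates every DAG over a few variables,
# groups DAGs into Markov equivalence classes via the "same skeleton and same
# v-structures" characterization, scores one representative per class with the
# score-equivalent Gaussian BIC, and reports the k best classes.
import itertools
import numpy as np


def is_acyclic(edges, d):
    """Kahn's algorithm: a directed graph is a DAG iff every node can be ordered."""
    indeg = [0] * d
    children = [[] for _ in range(d)]
    for i, j in edges:
        indeg[j] += 1
        children[i].append(j)
    queue = [i for i in range(d) if indeg[i] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in children[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == d


def all_dags(d):
    """Yield every DAG on d nodes as a frozenset of directed edges (i, j), i.e. i -> j."""
    candidates = [(i, j) for i in range(d) for j in range(d) if i != j]
    for r in range(len(candidates) + 1):
        for edges in itertools.combinations(candidates, r):
            if is_acyclic(edges, d):
                yield frozenset(edges)


def equivalence_class_key(edges):
    """Two DAGs are Markov equivalent iff they have the same skeleton and v-structures."""
    skeleton = frozenset(frozenset(e) for e in edges)
    parents = {}
    for i, j in edges:
        parents.setdefault(j, set()).add(i)
    v_structures = set()
    for c, pa in parents.items():
        for a, b in itertools.combinations(sorted(pa), 2):
            if frozenset((a, b)) not in skeleton:    # unshielded collider a -> c <- b
                v_structures.add((a, c, b))
    return skeleton, frozenset(v_structures)


def bic_score(edges, data):
    """Gaussian BIC; it decomposes over nodes and is equal for Markov equivalent DAGs."""
    n, d = data.shape
    parents = {j: [] for j in range(d)}
    for i, j in edges:
        parents[j].append(i)
    score = 0.0
    for j in range(d):
        X = np.column_stack([data[:, parents[j]], np.ones(n)])   # parents + intercept
        resid = data[:, j] - X @ np.linalg.lstsq(X, data[:, j], rcond=None)[0]
        sigma2 = max(resid @ resid / n, 1e-12)
        score += -0.5 * n * np.log(sigma2) - 0.5 * (len(parents[j]) + 2) * np.log(n)
    return score


def k_best_classes(data, k=3):
    """Score one representative DAG per Markov equivalence class; return the k best."""
    scores = {}
    for dag in all_dags(data.shape[1]):
        key = equivalence_class_key(dag)
        if key not in scores:                        # any representative gives the same BIC
            scores[key] = bic_score(dag, data)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)                   # simulated continuous data: x0 -> x1 -> x2
    x0 = rng.normal(size=500)
    x1 = 0.8 * x0 + rng.normal(size=500)
    x2 = -0.6 * x1 + rng.normal(size=500)
    for (skel, vstr), s in k_best_classes(np.column_stack([x0, x1, x2]), k=3):
        print(f"score={s:.1f}  skeleton={sorted(tuple(sorted(e)) for e in skel)}  "
              f"v-structures={sorted(vstr)}")
```

For three variables this visits 25 DAGs falling into 11 equivalence classes; the exhaustive enumeration is only meant to make the search space concrete, whereas the paper's algorithm describes that space through conditional independence constraints rather than by enumerating DAGs.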
Cite
Text
van Ommen. "Learning Bayesian Networks by Branching on Constraints." Proceedings of the Ninth International Conference on Probabilistic Graphical Models, 2018.Markdown
[van Ommen. "Learning Bayesian Networks by Branching on Constraints." Proceedings of the Ninth International Conference on Probabilistic Graphical Models, 2018.](https://mlanthology.org/pgm/2018/vanommen2018pgm-learning/)BibTeX
@inproceedings{vanommen2018pgm-learning,
title = {{Learning Bayesian Networks by Branching on Constraints}},
author = {van Ommen, Thijs},
booktitle = {Proceedings of the Ninth International Conference on Probabilistic Graphical Models},
year = {2018},
pages = {511--522},
volume = {72},
url = {https://mlanthology.org/pgm/2018/vanommen2018pgm-learning/}
}