Structure Learning of Bayesian Networks Using Constraints

Abstract

This paper addresses exact learning of Bayesian network structure from data and expert knowledge using decomposable score functions. First, it describes useful properties that strongly reduce the time and memory costs of many known methods such as hill-climbing, dynamic programming and sampling variable orderings. Second, a branch-and-bound algorithm is presented that integrates parameter and structural constraints with data so as to guarantee global optimality with respect to the score function. It is an anytime procedure because, if stopped, it provides the best current solution and an estimate of how far it is from the global optimum. We show empirically the advantages of the properties and the constraints, and the applicability of the algorithm to large data sets (up to one hundred variables) that cannot be handled by other current methods (limited to around 30 variables).
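The anytime behavior described in the abstract rests on two ingredients: a decomposable score (the network score is a sum of per-variable local scores) and a relaxation that upper-bounds the optimum, so a gap to the global solution can be reported at any point. Below is a minimal toy sketch of these ideas, not the paper's algorithm: the local scores are made up, the candidate parent sets are hand-picked, and the relaxation simply lets each variable take its best parent set while ignoring acyclicity.

```python
from itertools import product

# Hypothetical local scores s(X_i, Pa_i), higher is better. In practice these
# would be computed from data with a decomposable score such as BIC or BDeu.
local_score = {
    "A": {frozenset(): -10.0},
    "B": {frozenset(): -12.0, frozenset({"C"}): -9.0},
    "C": {frozenset(): -11.0, frozenset({"B"}): -8.0},
}

def network_score(parents):
    # Decomposability: the DAG score is a sum of per-variable terms, so a
    # local change to one parent set changes exactly one term (this is what
    # makes caching and local search cheap).
    return sum(local_score[v][frozenset(p)] for v, p in parents.items())

def is_acyclic(parents):
    # Kahn-style check: repeatedly remove variables with no remaining parents.
    remaining = {v: set(p) for v, p in parents.items()}
    while remaining:
        roots = [v for v, p in remaining.items() if not p]
        if not roots:
            return False  # every remaining variable has a parent -> cycle
        for v in roots:
            del remaining[v]
        for p in remaining.values():
            p.difference_update(roots)
    return True

# Relaxation: let every variable take its best parent set, ignoring
# acyclicity. This over-counts, so it is a valid upper bound on the optimum.
upper_bound = sum(max(s.values()) for s in local_score.values())

best, best_parents = float("-inf"), None
for combo in product(*[[(v, pa) for pa in local_score[v]] for v in local_score]):
    parents = dict(combo)
    if is_acyclic(parents):
        s = network_score(parents)
        if s > best:
            best, best_parents = s, parents
            # Anytime property: at any point we can report the incumbent
            # `best` and the optimality gap `upper_bound - best`.

print(best, upper_bound - best)  # -30.0 3.0
```

Here the cyclic assignment B←C, C←B would reach the relaxed bound of -27.0, but the best acyclic structure scores -30.0, leaving a gap of 3.0; a real branch-and-bound would tighten such bounds per subproblem instead of enumerating exhaustively.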

Cite

Text

de Campos et al. "Structure Learning of Bayesian Networks Using Constraints." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553389

Markdown

[de Campos et al. "Structure Learning of Bayesian Networks Using Constraints." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/decampos2009icml-structure/) doi:10.1145/1553374.1553389

BibTeX

@inproceedings{decampos2009icml-structure,
  title     = {{Structure Learning of Bayesian Networks Using Constraints}},
  author    = {de Campos, Cassio P. and Zeng, Zhi and Ji, Qiang},
  booktitle = {International Conference on Machine Learning},
  year      = {2009},
  pages     = {113--120},
  doi       = {10.1145/1553374.1553389},
  url       = {https://mlanthology.org/icml/2009/decampos2009icml-structure/}
}