Bayesian Network Parameter Learning Using EM with Parameter Sharing

Abstract

This paper explores the effects of parameter sharing on Bayesian network (BN) parameter learning when there is incomplete data. Using the Expectation Maximization (EM) algorithm, we investigate how varying degrees of parameter sharing, varying numbers of hidden nodes, and different dataset sizes impact EM performance. The specific metrics of EM performance examined are: likelihood, error, and the number of iterations required for convergence. These metrics are important in a number of applications, and we emphasize learning of BNs for diagnosis of electrical power systems. One main point, which we investigate both analytically and empirically, is how parameter sharing impacts the error associated with EM's parameter estimates.
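To illustrate the idea of EM with parameter sharing under incomplete data (a minimal sketch with a hypothetical toy model, not the paper's networks or implementation): a single hidden binary node H with two observed children X1 and X2 whose CPTs are tied, so the M-step pools expected counts from both children when re-estimating the shared parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: hidden H ~ Bernoulli(pi_true);
# X1, X2 | H=h ~ Bernoulli(theta_true[h]), with X1 and X2
# *sharing* the same CPT (the parameter-sharing assumption).
pi_true = 0.6
theta_true = np.array([0.2, 0.8])

n = 2000
H = (rng.random(n) < pi_true).astype(int)
X = (rng.random((n, 2)) < theta_true[H, None]).astype(float)  # H is never observed

# EM: because the two child CPTs are tied, the M-step divides
# pooled expected counts by 2 * (expected number of parents in state h).
pi, theta = 0.5, np.array([0.4, 0.6])
for _ in range(200):
    # E-step: posterior q = P(H=1 | x1, x2) for each record
    lik1 = pi * np.prod(theta[1] ** X * (1 - theta[1]) ** (1 - X), axis=1)
    lik0 = (1 - pi) * np.prod(theta[0] ** X * (1 - theta[0]) ** (1 - X), axis=1)
    q = lik1 / (lik0 + lik1)
    # M-step: update prior, then the single shared CPT from pooled counts
    pi = q.mean()
    theta = np.array([
        ((1 - q)[:, None] * X).sum() / (2 * (1 - q).sum()),
        (q[:, None] * X).sum() / (2 * q.sum()),
    ])
```

With enough data and an asymmetric initialization, the estimates recover the generating parameters; tying the two CPTs halves the number of free parameters, which is the mechanism the paper studies for its effect on error and convergence.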

Cite

Text

Reed and Mengshoel. "Bayesian Network Parameter Learning Using EM with Parameter Sharing." Conference on Uncertainty in Artificial Intelligence, 2014.

Markdown

[Reed and Mengshoel. "Bayesian Network Parameter Learning Using EM with Parameter Sharing." Conference on Uncertainty in Artificial Intelligence, 2014.](https://mlanthology.org/uai/2014/reed2014uai-bayesian/)

BibTeX

@inproceedings{reed2014uai-bayesian,
  title     = {{Bayesian Network Parameter Learning Using EM with Parameter Sharing}},
  author    = {Reed, Erik and Mengshoel, Ole J.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2014},
  pages     = {48-59},
  url       = {https://mlanthology.org/uai/2014/reed2014uai-bayesian/}
}