Approximate Expectation Maximization
Abstract
We discuss the integration of the expectation-maximization (EM) algorithm for maximum likelihood learning of Bayesian networks with belief propagation algorithms for approximate inference. Specifically, we propose to combine the outer-loop step of convergent belief propagation algorithms with the M-step of the EM algorithm. This yields an approximate EM algorithm that is essentially still a double-loop algorithm, with the important advantage of an inner loop that is guaranteed to converge. Simulations illustrate the merits of such an approach.
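The double-loop structure sketched in the abstract can be illustrated on a toy problem. The sketch below runs EM on a two-component 1-D Gaussian mixture: the E-step plays the role of the inner inference loop (exact here, whereas the paper replaces it with a convergent belief propagation routine for intractable Bayesian networks), and the M-step is the outer-loop parameter update. All function names and parameter choices are illustrative, not taken from the paper.

```python
import math
import random

def em_mixture(data, iters=50):
    """Double-loop EM sketch: two-component 1-D Gaussian mixture
    with unit variances and equal weights (means unknown)."""
    mu = [min(data), max(data)]  # crude initialization at the data extremes
    for _ in range(iters):
        # E-step ("inner loop"): posterior responsibilities per data point.
        # In the paper's setting this step is itself an iterative,
        # provably convergent approximate-inference procedure.
        resp = []
        for x in data:
            w = [math.exp(-0.5 * (x - m) ** 2) for m in mu]
            s = w[0] + w[1]
            resp.append((w[0] / s, w[1] / s))
        # M-step ("outer loop"): re-estimate the means from the
        # expected sufficient statistics.
        for k in range(2):
            num = sum(r[k] * x for r, x in zip(resp, data))
            den = sum(r[k] for r in resp)
            mu[k] = num / den
    return sorted(mu)
```

On well-separated synthetic data the recovered means converge close to the generating means, which is the behavior the approximate variant aims to preserve while keeping the inner loop convergent.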
Cite
Text
Heskes et al. "Approximate Expectation Maximization." Neural Information Processing Systems, 2003.
Markdown
[Heskes et al. "Approximate Expectation Maximization." Neural Information Processing Systems, 2003.](https://mlanthology.org/neurips/2003/heskes2003neurips-approximate/)
BibTeX
@inproceedings{heskes2003neurips-approximate,
title = {{Approximate Expectation Maximization}},
author = {Heskes, Tom and Zoeter, Onno and Wiegerinck, Wim},
booktitle = {Neural Information Processing Systems},
year = {2003},
pages = {353--360},
url = {https://mlanthology.org/neurips/2003/heskes2003neurips-approximate/}
}