Scaled CGEM: A Fast Accelerated EM
Abstract
The EM algorithm is a popular method for maximum likelihood estimation of Bayesian networks in the presence of missing data. Its simplicity and general convergence properties make it very attractive. However, it sometimes converges slowly. Several accelerated EM methods based on gradient-based optimization techniques have been proposed. In principle, they all employ a line search that requires several NP-hard likelihood evaluations per iteration. We propose a novel acceleration called SCGEM based on scaled conjugate gradients (SCGs), well known from training neural networks. SCGEM avoids the line search by adopting the scaling mechanism of SCGs applied to the expected information matrix. This guarantees a single likelihood evaluation per iteration. We empirically compare SCGEM with EM and conventional conjugate gradient accelerated EM. The experiments show that SCGEM can significantly accelerate both of them while matching their solution quality.
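The scaling mechanism the abstract refers to can be illustrated with a Møller-style scaled conjugate gradient step, which replaces the line search with a damped, closed-form step size and a trust-region-like comparison ratio. The sketch below is a generic SCG minimizer applied to a toy 2-D quadratic, not the paper's Bayesian-network implementation; the objective, constants, and function names are all illustrative assumptions.

```python
import math

# Toy stand-in for a negative log-likelihood: a 2-D quadratic with
# minimum at w = (1, -2). The paper's setting (Bayesian-network
# parameters, expected information matrix) is replaced by this
# simple objective purely for illustration.
def f(w):
    return 0.5 * (4.0 * (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2)

def grad(w):
    return [4.0 * (w[0] - 1.0), w[1] + 2.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scg(w, iters=100, sigma0=1e-4, lam=1e-6):
    """Scaled conjugate gradient sketch: no line search, a single
    new objective evaluation per iteration."""
    r = [-g for g in grad(w)]          # negative gradient
    p = list(r)                        # search direction
    lam_bar, success, delta0 = 0.0, True, 0.0
    for _ in range(iters):
        if math.sqrt(dot(r, r)) < 1e-8:
            break                      # gradient small: converged
        pn2 = dot(p, p)
        if success:
            # One-sided finite difference approximates the curvature
            # p^T H p along p -- no explicit Hessian needed.
            sigma = sigma0 / math.sqrt(pn2)
            w_plus = [wi + sigma * pi for wi, pi in zip(w, p)]
            s = [(gp - g) / sigma
                 for gp, g in zip(grad(w_plus), grad(w))]
            delta0 = dot(p, s)
        # Damp the curvature estimate; this scaling replaces the line search.
        delta = delta0 + (lam - lam_bar) * pn2
        if delta <= 0:                 # force positive definiteness
            lam_bar = 2.0 * (lam - delta / pn2)
            delta = -delta + lam * pn2
            lam = lam_bar
        mu = dot(p, r)
        alpha = mu / delta             # closed-form step size
        w_new = [wi + alpha * pi for wi, pi in zip(w, p)]
        # Compare predicted vs. actual decrease (the one f-evaluation).
        Delta = 2.0 * delta * (f(w) - f(w_new)) / (mu * mu)
        if Delta >= 0:                 # good step: accept, update direction
            w, lam_bar, success = w_new, 0.0, True
            r_new = [-g for g in grad(w)]
            beta = (dot(r_new, r_new) - dot(r_new, r)) / mu
            p = [rn + beta * pi for rn, pi in zip(r_new, p)]
            r = r_new
            if Delta >= 0.75:          # very good model fit: relax damping
                lam = max(lam / 4.0, 1e-15)
        else:                          # bad step: reject it
            lam_bar, success = lam, False
        if Delta < 0.25:               # poor model fit: raise damping
            lam += delta * (1.0 - Delta) / pn2
    return w

w_opt = scg([0.0, 0.0])   # approaches the minimizer (1, -2)
```

In SCGEM the analogous damping is applied with the expected information matrix standing in for the finite-difference curvature term, which is what keeps the method to one likelihood evaluation per iteration.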
Cite
Text
Fischer and Kersting. "Scaled CGEM: A Fast Accelerated EM." European Conference on Machine Learning, 2003. doi:10.1007/978-3-540-39857-8_14
Markdown
[Fischer and Kersting. "Scaled CGEM: A Fast Accelerated EM." European Conference on Machine Learning, 2003.](https://mlanthology.org/ecmlpkdd/2003/fischer2003ecml-scaled/) doi:10.1007/978-3-540-39857-8_14
BibTeX
@inproceedings{fischer2003ecml-scaled,
title = {{Scaled CGEM: A Fast Accelerated EM}},
author = {Fischer, Jörg and Kersting, Kristian},
booktitle = {European Conference on Machine Learning},
year = {2003},
pages = {133-144},
doi = {10.1007/978-3-540-39857-8_14},
url = {https://mlanthology.org/ecmlpkdd/2003/fischer2003ecml-scaled/}
}