Accelerating EM: An Empirical Study
Abstract
Many applications require that we learn the parameters of a model from data. EM (Expectation-Maximization) is a method for learning the parameters of probabilistic models with missing or hidden data. There are instances in which this method is slow to converge. Therefore, several accelerations have been proposed to improve the method. None of the proposed acceleration methods are theoretically dominant and experimental comparisons are lacking. In this paper, we present the different proposed accelerations and compare them experimentally. From the results of the experiments, we argue that some acceleration of EM is always possible, but that which acceleration is superior depends on properties of the problem.
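To make the setting concrete, below is a minimal, hedged sketch of vanilla EM for a two-component 1-D Gaussian mixture. It is a generic illustration of the algorithm the abstract refers to, not the models, datasets, or acceleration schemes evaluated in the paper; the function name `em_gmm_1d` and all numerical choices are hypothetical.

```python
# Illustrative EM for a two-component 1-D Gaussian mixture (generic sketch,
# not the paper's experimental setup or any of its acceleration methods).
import numpy as np

def em_gmm_1d(x, n_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # Hypothetical initialization: mixing weight, means, and variances.
    w = 0.5
    mu = rng.choice(x, size=2, replace=False).astype(float)
    var = np.array([x.var(), x.var()])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        p1 = w * np.exp(-0.5 * (x - mu[0])**2 / var[0]) / np.sqrt(2 * np.pi * var[0])
        p2 = (1 - w) * np.exp(-0.5 * (x - mu[1])**2 / var[1]) / np.sqrt(2 * np.pi * var[1])
        resp = p1 / (p1 + p2)
        # M-step: re-estimate parameters from expected sufficient statistics.
        n1 = resp.sum()
        n2 = len(x) - n1
        w = n1 / len(x)
        mu[0] = (resp * x).sum() / n1
        mu[1] = ((1 - resp) * x).sum() / n2
        var[0] = (resp * (x - mu[0])**2).sum() / n1
        var[1] = ((1 - resp) * (x - mu[1])**2).sum() / n2
        # Monitor the log-likelihood; EM increases it monotonically, and slow
        # convergence shows up as many small increments, which is what the
        # acceleration methods compared in the paper try to shorten.
        ll = np.log(p1 + p2).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return w, mu, var

# Example usage on synthetic data.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm_1d(data))
```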
Cite
Text
Ortiz and Kaelbling. "Accelerating EM: An Empirical Study." Conference on Uncertainty in Artificial Intelligence, 1999.
Markdown
[Ortiz and Kaelbling. "Accelerating EM: An Empirical Study." Conference on Uncertainty in Artificial Intelligence, 1999.](https://mlanthology.org/uai/1999/ortiz1999uai-accelerating/)
BibTeX
@inproceedings{ortiz1999uai-accelerating,
title = {{Accelerating EM: An Empirical Study}},
author = {Ortiz, Luis E. and Kaelbling, Leslie Pack},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {1999},
pages = {512--521},
url = {https://mlanthology.org/uai/1999/ortiz1999uai-accelerating/}
}