Expectation Maximization Algorithms for Conditional Likelihoods
Abstract
We introduce an expectation-maximization-type (EM) algorithm for maximum likelihood optimization of conditional densities. It is applicable to hidden-variable models whose distributions belong to the exponential family. The algorithm can alternatively be viewed as automatic step-size selection for gradient ascent, where extra computation is traded for a guarantee that each step increases the likelihood. This tradeoff makes the algorithm computationally more feasible than the earlier conditional EM. The method gives a theoretical basis for the extended Baum-Welch algorithms used in discriminative hidden Markov models for speech recognition, and compares favourably with the current best method in the experiments.
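As background for the monotone-ascent guarantee the abstract refers to, here is a minimal sketch of standard EM for a two-component 1-D Gaussian mixture, a hidden-variable exponential-family model. This is the classic joint-likelihood EM, not the paper's conditional-likelihood algorithm; the function and variable names are illustrative assumptions.

```python
# Illustrative only: textbook EM for a two-component 1-D Gaussian mixture with
# shared variance, showing the property that each EM step never decreases the
# log-likelihood. NOT the conditional-likelihood algorithm of the paper.
import math
import random

def em_gmm(data, iters=50):
    # Initial guesses (assumed, for illustration).
    w, mu1, mu2, var = 0.5, min(data), max(data), 1.0
    log_liks = []
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point.
        resp, ll = [], 0.0
        for x in data:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * var))
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * var))
            resp.append(p1 / (p1 + p2))
            ll += math.log((p1 + p2) / math.sqrt(2 * math.pi * var))
        log_liks.append(ll)
        # M-step: closed-form updates from expected sufficient statistics.
        n1 = sum(resp)
        w = n1 / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)
        var = sum(r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2
                  for r, x in zip(resp, data)) / len(data)
    return log_liks

random.seed(0)
data = ([random.gauss(-2, 1) for _ in range(100)]
        + [random.gauss(3, 1) for _ in range(100)])
lls = em_gmm(data)
# Exact E- and M-steps guarantee a non-decreasing log-likelihood sequence.
assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))
```

The paper's contribution concerns the conditional setting, where this monotonicity is harder to obtain and is traded against computation.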
Cite
Text
Salojärvi et al. "Expectation Maximization Algorithms for Conditional Likelihoods." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102446
Markdown
[Salojärvi et al. "Expectation Maximization Algorithms for Conditional Likelihoods." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/salojarvi2005icml-expectation/) doi:10.1145/1102351.1102446
BibTeX
@inproceedings{salojarvi2005icml-expectation,
title = {{Expectation Maximization Algorithms for Conditional Likelihoods}},
author = {Salojärvi, Jarkko and Puolamäki, Kai and Kaski, Samuel},
booktitle = {International Conference on Machine Learning},
year = {2005},
pages = {752--759},
doi = {10.1145/1102351.1102446},
url = {https://mlanthology.org/icml/2005/salojarvi2005icml-expectation/}
}