Small-Variance Asymptotics for Hidden Markov Models
Abstract
Small-variance asymptotics provide an emerging technique for obtaining scalable combinatorial algorithms from rich probabilistic models. We present a small-variance asymptotic analysis of the Hidden Markov Model and its infinite-state Bayesian nonparametric extension. Starting with the standard HMM, we first derive a “hard” inference algorithm analogous to k-means that arises when particular variances in the model tend to zero. This analysis is then extended to the Bayesian nonparametric case, yielding a simple, scalable, and flexible algorithm for discrete-state sequence data with a non-fixed number of states. We also derive the corresponding combinatorial objective functions arising from our analysis, which involve a k-means-like term along with penalties based on state transitions and the number of states. Such algorithms are particularly valuable in the nonparametric setting, where standard probabilistic inference algorithms lack scalability and are heavily dependent on good initialization. Results on synthetic and real data sets demonstrate the advantages of the proposed framework.
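To make the abstract's description concrete, the display below sketches one plausible form of such a combinatorial objective. It is an illustrative assumption rather than the exact objective derived in the paper: the squared-error fit term, the indicator-based transition penalty, and the weights $\lambda_1$, $\lambda_2$ are introduced here only for exposition.

\[
\min_{K,\; \{\mu_k\}_{k=1}^{K},\; z_{1:T}} \;
\sum_{t=1}^{T} \lVert x_t - \mu_{z_t} \rVert^2
\;+\; \lambda_1 \sum_{t=2}^{T} \mathbf{1}\bigl[z_t \neq z_{t-1}\bigr]
\;+\; \lambda_2\, K
\]

Here $x_{1:T}$ is the observed sequence, $z_t \in \{1, \dots, K\}$ are hard state assignments, and $\mu_k$ are state means. With $\lambda_1 = \lambda_2 = 0$ the objective reduces to a k-means-like fit term, while the two penalties discourage frequent state transitions and a large number of states, mirroring the structure described in the abstract.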
Cite
Text
Roychowdhury et al. "Small-Variance Asymptotics for Hidden Markov Models." Neural Information Processing Systems, 2013.

Markdown
[Roychowdhury et al. "Small-Variance Asymptotics for Hidden Markov Models." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/roychowdhury2013neurips-smallvariance/)

BibTeX
@inproceedings{roychowdhury2013neurips-smallvariance,
  title = {{Small-Variance Asymptotics for Hidden Markov Models}},
  author = {Roychowdhury, Anirban and Jiang, Ke and Kulis, Brian},
  booktitle = {Neural Information Processing Systems},
  year = {2013},
  pages = {2103--2111},
  url = {https://mlanthology.org/neurips/2013/roychowdhury2013neurips-smallvariance/}
}