The Infinite Hidden Markov Model

Abstract

We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet process capable of capturing a rich set of transition dynamics. The three hyperparameters control the time scale of the dynamics, the sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence. In this framework it is also natural to allow the alphabet of emitted symbols to be infinite: consider, for example, symbols being possible words appearing in English text.
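As an informal illustration of the kind of generative process the abstract describes, the sketch below draws a hidden state sequence from a two-level urn scheme with three hyperparameters. The parameter names (`alpha` for self-transition bias / time scale, `beta` for how often a shared "oracle" is consulted, controlling sparsity, `gamma` for how readily new states are created) and the exact mechanism are assumptions for illustration, not the paper's precise algorithm.

```python
# Minimal sketch (assumed mechanism): a two-level urn scheme over a countably
# infinite state space, governed by three hyperparameters.
import random
from collections import defaultdict

def sample_hidden_states(T, alpha=1.0, beta=1.0, gamma=1.0, seed=0):
    """Draw a length-T hidden state sequence from the urn scheme."""
    rng = random.Random(seed)
    n = defaultdict(lambda: defaultdict(float))  # transition counts n[i][j]
    n_oracle = defaultdict(float)                # shared oracle counts
    states = [0]
    n_oracle[0] += 1.0
    for _ in range(T - 1):
        i = states[-1]
        # Level 1: existing transitions (with a self-transition bonus alpha)
        # compete against weight beta for consulting the oracle.
        weights = dict(n[i])
        weights[i] = weights.get(i, 0.0) + alpha
        r = rng.uniform(0.0, sum(weights.values()) + beta)
        j = None
        for k, w in weights.items():
            if r < w:
                j = k
                break
            r -= w
        if j is None:
            # Level 2: the oracle picks an existing state in proportion to
            # its counts, or a brand-new state with weight gamma.
            r = rng.uniform(0.0, sum(n_oracle.values()) + gamma)
            for k, w in n_oracle.items():
                if r < w:
                    j = k
                    break
                r -= w
            if j is None:
                j = max(n_oracle) + 1  # instantiate a new hidden state
            n_oracle[j] += 1.0
        n[i][j] += 1.0
        states.append(j)
    return states

# Larger gamma tends to yield more distinct states in a fixed-length sequence.
seq = sample_hidden_states(200, alpha=2.0, beta=1.0, gamma=5.0)
print(len(set(seq)), "distinct states in", len(seq), "steps")
```

Under this assumed scheme, `alpha` favors staying in the current state (slower dynamics), `beta` controls how often transitions fall back on the shared oracle (sparser per-state transition counts when small), and `gamma` governs how many distinct states appear in a finite sequence.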

Cite

Text

Beal et al. "The Infinite Hidden Markov Model." Neural Information Processing Systems, 2001.

Markdown

[Beal et al. "The Infinite Hidden Markov Model." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/beal2001neurips-infinite/)

BibTeX

@inproceedings{beal2001neurips-infinite,
  title     = {{The Infinite Hidden Markov Model}},
  author    = {Beal, Matthew J. and Ghahramani, Zoubin and Rasmussen, Carl E.},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {577--584},
  url       = {https://mlanthology.org/neurips/2001/beal2001neurips-infinite/}
}