Hidden Markov Model Induction by Bayesian Model Merging

Abstract

This paper describes a technique for learning both the number of states and the topology of Hidden Markov Models from examples. The induction process starts with the most specific model consistent with the training data and generalizes by successively merging states. Both the choice of states to merge and the stopping criterion are guided by the Bayesian posterior probability. We compare our algorithm with the Baum-Welch method of estimating fixed-size models, and find that it can induce minimal HMMs from data in cases where fixed estimation does not converge or requires redundant parameters to converge.
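The abstract outlines the procedure at a high level: build the most specific HMM (a dedicated state path per training string), then repeatedly merge the pair of states whose merge most improves the Bayesian posterior, stopping when no merge helps. The sketch below illustrates that loop in Python. It is not the paper's implementation: the crude size penalty standing in for the structural prior, the exact forward-algorithm scoring (the paper uses approximations for efficiency), and all function names are assumptions made for this example.

# Minimal sketch of HMM induction by state merging, under the
# assumptions stated above. Models are triples (trans, emit, starts)
# of count dictionaries; counts are normalized into probabilities
# on the fly.
import math
from itertools import combinations

END = "END"  # pseudo-state marking string termination

def initial_model(seqs):
    """Most specific HMM: one left-to-right state chain per training string."""
    trans, emit, starts = {}, {}, {}
    s = 0
    for seq in seqs:  # assumes non-empty strings
        starts[s] = starts.get(s, 0.0) + 1.0
        for i, sym in enumerate(seq):
            emit[s] = {sym: 1.0}
            trans[s] = {s + 1 if i < len(seq) - 1 else END: 1.0}
            s += 1
    return trans, emit, starts

def p_emit(emit, s, sym):
    return emit[s].get(sym, 0.0) / sum(emit[s].values())

def log_likelihood(model, seqs):
    """Exact data log likelihood via the forward algorithm."""
    trans, emit, starts = model
    z0 = sum(starts.values())
    ll = 0.0
    for seq in seqs:
        alpha = {s: (c / z0) * p_emit(emit, s, seq[0]) for s, c in starts.items()}
        for sym in seq[1:]:
            new = {}
            for s, a in alpha.items():
                if a <= 0.0:
                    continue
                z = sum(trans[s].values())
                for t, c in trans[s].items():
                    if t != END:
                        p = a * (c / z) * p_emit(emit, t, sym)
                        new[t] = new.get(t, 0.0) + p
            alpha = new
        p_end = sum(a * trans[s].get(END, 0.0) / sum(trans[s].values())
                    for s, a in alpha.items())
        if p_end <= 0.0:
            return float("-inf")
        ll += math.log(p_end)
    return ll

def merge(model, a, b):
    """Fold state b into state a by adding their transition/emission counts."""
    trans, emit, starts = model
    ren = lambda s: a if s == b else s
    new_t, new_e, new_s = {}, {}, {}
    for s, row in trans.items():
        dst = new_t.setdefault(ren(s), {})
        for t, c in row.items():
            t = t if t == END else ren(t)
            dst[t] = dst.get(t, 0.0) + c
    for s, row in emit.items():
        dst = new_e.setdefault(ren(s), {})
        for sym, c in row.items():
            dst[sym] = dst.get(sym, 0.0) + c
    for s, c in starts.items():
        new_s[ren(s)] = new_s.get(ren(s), 0.0) + c
    return new_t, new_e, new_s

def log_posterior(model, seqs, size_penalty=2.0):
    """Hypothetical score: log likelihood plus a size-based log prior."""
    return log_likelihood(model, seqs) - size_penalty * len(model[0])

def induce(seqs, size_penalty=2.0):
    """Greedily merge the best state pair until no merge improves the score."""
    model = initial_model(seqs)
    score = log_posterior(model, seqs, size_penalty)
    while True:
        best = None
        for a, b in combinations(sorted(model[0]), 2):
            cand = merge(model, a, b)
            s = log_posterior(cand, seqs, size_penalty)
            if s > score and (best is None or s > best[0]):
                best = (s, cand)
        if best is None:
            return model
        score, model = best

if __name__ == "__main__":
    trans, emit, starts = induce(["ab", "abab", "ababab"])
    print(len(trans), "states after merging")

Run on the three strings of the form (ab)^n above, the loop should collapse the initial twelve-state model into a much smaller cyclic one, since merging states that emit the same symbol costs little likelihood while the prior rewards each state removed. The greedy pairwise search is the simplest possible strategy; it rescans all state pairs each round and so scales poorly beyond toy data.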

Cite

Text

Stolcke and Omohundro. "Hidden Markov Model Induction by Bayesian Model Merging." Neural Information Processing Systems, 1992.

Markdown

[Stolcke and Omohundro. "Hidden Markov Model Induction by Bayesian Model Merging." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/stolcke1992neurips-hidden/)

BibTeX

@inproceedings{stolcke1992neurips-hidden,
  title     = {{Hidden Markov Model Induction by Bayesian Model Merging}},
  author    = {Stolcke, Andreas and Omohundro, Stephen},
  booktitle = {Neural Information Processing Systems},
  year      = {1992},
  pages     = {11--18},
  url       = {https://mlanthology.org/neurips/1992/stolcke1992neurips-hidden/}
}