Collapsed Variational Bayesian Inference for Hidden Markov Models

Abstract

Approximate inference for Bayesian models is dominated by two approaches, variational Bayesian inference and Markov chain Monte Carlo. Both approaches have their own advantages and disadvantages, and they can complement each other. Recently, researchers have proposed collapsed variational Bayesian inference to combine the advantages of both. Such inference methods have been successful in several models whose hidden variables are conditionally independent given the parameters. In this paper we propose two collapsed variational Bayesian inference algorithms for hidden Markov models, a popular framework for representing time series data. We validate our algorithms on the natural language processing task of unsupervised part-of-speech induction, showing that they are both more computationally efficient than sampling and more accurate than standard variational Bayesian inference for HMMs.

Cite

Text

Wang and Blunsom. "Collapsed Variational Bayesian Inference for Hidden Markov Models." International Conference on Artificial Intelligence and Statistics, 2013.

Markdown

[Wang and Blunsom. "Collapsed Variational Bayesian Inference for Hidden Markov Models." International Conference on Artificial Intelligence and Statistics, 2013.](https://mlanthology.org/aistats/2013/wang2013aistats-collapsed/)

BibTeX

@inproceedings{wang2013aistats-collapsed,
  title     = {{Collapsed Variational Bayesian Inference for Hidden Markov Models}},
  author    = {Wang, Pengyu and Blunsom, Phil},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2013},
  pages     = {599--607},
  url       = {https://mlanthology.org/aistats/2013/wang2013aistats-collapsed/}
}