Factorial Hidden Markov Models
Abstract
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
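To make the combinatorial nature of the distributed state concrete, the sketch below (not the authors' code; all names and the chain/state counts are illustrative assumptions) builds the "flat" HMM equivalent to a factorial HMM with M independent chains of K states each. The joint transition matrix is the Kronecker product of the per-chain matrices, so the exact E-step must run forward-backward over K**M states, which is what the mean field and Gibbs approximations avoid.

```python
import numpy as np

# Minimal sketch of a factorial HMM's distributed state: M independent
# Markov chains, each with K states.  The joint hidden state is the tuple
# of chain states, so the equivalent "flat" HMM has K**M states.

M, K = 3, 4            # 3 chains of 4 states -> 4**3 = 64 joint states
rng = np.random.default_rng(0)

def random_stochastic(shape):
    """Return a randomly initialized row-stochastic array."""
    A = rng.random(shape)
    return A / A.sum(axis=-1, keepdims=True)

# Per-chain initial distributions and transition matrices.
pi = [random_stochastic((K,)) for _ in range(M)]
P  = [random_stochastic((K, K)) for _ in range(M)]

# Because the chains evolve independently a priori, the joint transition
# matrix factorizes as the Kronecker product of the per-chain matrices.
P_joint, pi_joint = P[0], pi[0]
for m in range(1, M):
    P_joint = np.kron(P_joint, P[m])
    pi_joint = np.kron(pi_joint, pi[m])

print(P_joint.shape)   # (64, 64): exact forward-backward scales as O(T * K**(2M))
```

A mean field E-step, by contrast, keeps only a separate approximate posterior over each chain's K states (M*K numbers per time step) rather than the full K**M joint posterior, which is what makes it tractable.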
Cite
Text
Ghahramani and Jordan. "Factorial Hidden Markov Models." Neural Information Processing Systems, 1995.
Markdown
[Ghahramani and Jordan. "Factorial Hidden Markov Models." Neural Information Processing Systems, 1995.](https://mlanthology.org/neurips/1995/ghahramani1995neurips-factorial/)
BibTeX
@inproceedings{ghahramani1995neurips-factorial,
title = {{Factorial Hidden Markov Models}},
author = {Ghahramani, Zoubin and Jordan, Michael I.},
booktitle = {Neural Information Processing Systems},
year = {1995},
pages = {472-478},
url = {https://mlanthology.org/neurips/1995/ghahramani1995neurips-factorial/}
}