PAC-Learning of Markov Models with Hidden State

Abstract

The standard approach for learning Markov models with hidden state uses the Expectation-Maximization framework. While this approach has had a significant impact on several practical applications (e.g., speech recognition, biological sequence alignment), it has two major limitations: it requires a known model topology, and learning is only locally optimal. We propose a new PAC framework for learning both the topology and the parameters of partially observable Markov models. Our algorithm learns a Probabilistic Deterministic Finite Automaton (PDFA) which approximates a Hidden Markov Model (HMM) up to some desired degree of accuracy. We discuss theoretical conditions under which the algorithm produces an optimal solution (in the PAC sense) and demonstrate promising performance on simple dynamical systems.
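As an illustration of the model class the paper targets (not the paper's learning algorithm), a PDFA assigns each string a probability via deterministic state transitions: each state maps a symbol to an emission probability and a unique next state, with a reserved stop symbol carrying the halting probability. A minimal sketch, with a hypothetical toy two-state automaton:

```python
# Illustrative sketch only; the automaton and all names are hypothetical.
STOP = "$"

# Toy two-state PDFA over the alphabet {a, b}. Out of each state, the
# emission probabilities (including STOP) sum to 1, and each symbol
# leads to exactly one next state (the "deterministic" part).
pdfa = {
    "q0": {"a": (0.5, "q1"), "b": (0.3, "q0"), STOP: (0.2, None)},
    "q1": {"a": (0.1, "q0"), "b": (0.6, "q1"), STOP: (0.3, None)},
}

def string_prob(pdfa, start, s):
    """Probability that the PDFA emits string s and then halts."""
    state, p = start, 1.0
    for sym in s:
        if sym not in pdfa[state]:
            return 0.0  # no transition: the string has probability 0
        prob, state = pdfa[state][sym]
        p *= prob
    return p * pdfa[state][STOP][0]

print(string_prob(pdfa, "q0", "ab"))  # ≈ 0.09 (0.5 * 0.6 * 0.3)
```

Because transitions are deterministic given the observed symbol, computing a string's probability takes a single pass, in contrast to the forward-algorithm summation an HMM requires over its hidden state.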

Cite

Text

Gavaldà et al. "PAC-Learning of Markov Models with Hidden State." European Conference on Machine Learning, 2006. doi:10.1007/11871842_18

Markdown

[Gavaldà et al. "PAC-Learning of Markov Models with Hidden State." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/gavalda2006ecml-paclearning/) doi:10.1007/11871842_18

BibTeX

@inproceedings{gavalda2006ecml-paclearning,
  title     = {{PAC-Learning of Markov Models with Hidden State}},
  author    = {Gavaldà, Ricard and Keller, Philipp W. and Pineau, Joelle and Precup, Doina},
  booktitle = {European Conference on Machine Learning},
  year      = {2006},
  pages     = {150--161},
  doi       = {10.1007/11871842_18},
  url       = {https://mlanthology.org/ecmlpkdd/2006/gavalda2006ecml-paclearning/}
}