Active Learning for Hidden Markov Models: Objective Functions and Algorithms

Abstract

Hidden Markov Models (HMMs) model sequential data in many fields, such as text/speech processing and biosignal analysis. Active learning algorithms learn faster and/or better by closing the data-gathering loop, i.e., they choose the examples most informative with respect to their learning objectives. We introduce a framework and objective functions for active learning in three fundamental HMM problems: model learning, state estimation, and path estimation. In addition, we describe a new set of algorithms for efficiently finding optimal greedy queries using these objective functions. The algorithms are fast: selecting the optimal query takes time linear in the number of time steps. We present empirical results showing that these algorithms can significantly reduce the need for labelled training data.
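As context for the state-estimation problem the abstract refers to, the following is a minimal sketch of HMM filtering via the standard forward algorithm; it is not taken from the paper, and the toy states, emission symbols, and parameter values are purely illustrative assumptions. Note that each filtering step costs O(N²) in the number of states and the full pass is linear in the number of time steps.

```python
# Forward-algorithm filtering for a toy discrete HMM.
# All parameters below are illustrative assumptions, not from the paper.

def forward_filter(pi, A, B, obs):
    """Return filtered state distributions P(s_t | o_1..o_t) for each t.

    pi : initial state distribution, length N
    A  : transition matrix, A[i][j] = P(s_{t+1}=j | s_t=i)
    B  : emission matrix,  B[i][k] = P(o_t=k | s_t=i)
    obs: observation sequence as symbol indices
    """
    n = len(pi)
    # Incorporate the first observation, then normalise.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    z = sum(alpha)
    alpha = [a / z for a in alpha]
    beliefs = [alpha]
    for o in obs[1:]:
        # Predict (propagate through transitions), then update (emission).
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]
        beliefs.append(alpha)
    return beliefs

# Hypothetical example: states "rainy"/"sunny" emitting "umbrella"/"no umbrella".
pi = [0.5, 0.5]
A = [[0.7, 0.3], [0.3, 0.7]]
B = [[0.9, 0.1], [0.2, 0.8]]
beliefs = forward_filter(pi, A, B, [0, 0, 1])
```

Active-learning queries for state estimation can then be scored by how much a candidate label would reduce uncertainty in these filtered distributions.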

Cite

Text

Anderson and Moore. "Active Learning for Hidden Markov Models: Objective Functions and Algorithms." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102353

Markdown

[Anderson and Moore. "Active Learning for Hidden Markov Models: Objective Functions and Algorithms." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/anderson2005icml-active/) doi:10.1145/1102351.1102353

BibTeX

@inproceedings{anderson2005icml-active,
  title     = {{Active Learning for Hidden Markov Models: Objective Functions and Algorithms}},
  author    = {Anderson, Brigham S. and Moore, Andrew},
  booktitle = {International Conference on Machine Learning},
  year      = {2005},
  pages     = {9--16},
  doi       = {10.1145/1102351.1102353},
  url       = {https://mlanthology.org/icml/2005/anderson2005icml-active/}
}