Expectation Maximization of Forward Decoding Kernel Machines

Abstract

Forward Decoding Kernel Machines (FDKM) combine large-margin kernel classifiers with Hidden Markov Models (HMM) for Maximum a Posteriori (MAP) adaptive sequence estimation. This paper proposes a variant of FDKM training using Expectation-Maximization (EM). Parameterization of the expectation step controls the temporal extent of the context used in correcting noisy and missing labels in the training sequence. Experiments with EM-FDKM on TIMIT phone sequence data demonstrate up to 10% improvement in classification performance over FDKM trained with hard transitions between labels.
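The abstract describes forward decoding: kernel classifiers produce state-conditional transition probabilities, which are propagated forward over the sequence to yield MAP label estimates at each time step. The sketch below illustrates that decoding recursion only; the function names, RBF kernel choice, and placeholder parameters (`sv`, `weights`, `bias`) are illustrative assumptions, and the large-margin / EM training of the kernel expansions described in the paper is not reproduced here.

```python
import numpy as np

# Minimal sketch of forward decoding with kernel-based transition
# probabilities, in the spirit of the FDKM formulation summarized in the
# abstract.  Support vectors and kernel weights are random placeholders;
# the paper trains them with a large-margin criterion and an EM procedure
# that this sketch does not attempt to reproduce.

def rbf_kernel(x, sv, gamma=1.0):
    """RBF kernel between one input vector x and a set of support vectors."""
    d = sv - x                                  # (n_sv, dim)
    return np.exp(-gamma * np.sum(d * d, axis=1))

def transition_probs(x, sv, weights, bias):
    """Kernel-based transition probabilities P(state i at t | state j at t-1, x_t).

    weights has shape (n_states, n_states, n_sv): one kernel expansion per
    (previous state j, next state i) pair.  A softmax over i normalizes the
    outgoing probabilities of each previous state j.
    """
    k = rbf_kernel(x, sv)                       # (n_sv,)
    scores = weights @ k + bias                 # (n_states_prev, n_states_next)
    scores -= scores.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)

def forward_decode(X, sv, weights, bias):
    """Forward (MAP) decoding: propagate forward probabilities over time."""
    n_states = weights.shape[0]
    alpha = np.full(n_states, 1.0 / n_states)   # uniform initial state distribution
    labels = []
    for x in X:
        P = transition_probs(x, sv, weights, bias)  # P[j, i]
        alpha = alpha @ P                           # alpha_t[i] = sum_j alpha_{t-1}[j] P[j, i]
        alpha /= alpha.sum()                        # renormalize to avoid underflow
        labels.append(int(np.argmax(alpha)))        # MAP label estimate at time t
    return labels

# Toy usage with random placeholder parameters (illustration only).
rng = np.random.default_rng(0)
n_states, n_sv, dim, T = 3, 8, 4, 20
sv = rng.standard_normal((n_sv, dim))
weights = rng.standard_normal((n_states, n_states, n_sv))
bias = np.zeros((n_states, n_states))
X = rng.standard_normal((T, dim))
print(forward_decode(X, sv, weights, bias))
```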

Cite

Text

Chakrabartty and Cauwenberghs. "Expectation Maximization of Forward Decoding Kernel Machines." Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.

Markdown

[Chakrabartty and Cauwenberghs. "Expectation Maximization of Forward Decoding Kernel Machines." Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.](https://mlanthology.org/aistats/2003/chakrabartty2003aistats-expectation/)

BibTeX

@inproceedings{chakrabartty2003aistats-expectation,
  title     = {{Expectation Maximization of Forward Decoding Kernel Machines}},
  author    = {Chakrabartty, Shantanu and Cauwenberghs, Gert},
  booktitle = {Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics},
  year      = {2003},
  pages     = {65-71},
  volume    = {R4},
  url       = {https://mlanthology.org/aistats/2003/chakrabartty2003aistats-expectation/}
}