Forward-Decoding Kernel-Based Phone Recognition
Abstract
Forward decoding kernel machines (FDKM) combine large-margin classifiers with hidden Markov models (HMM) for maximum a posteriori (MAP) adaptive sequence estimation. State transitions in the sequence are conditioned on observed data using a kernel-based probability model trained with a recursive scheme that deals effectively with noisy and partially labeled data. Training over very large data sets is accomplished using a sparse probabilistic support vector machine (SVM) model based on quadratic entropy, and an on-line stochastic steepest descent algorithm. For speaker-independent continuous phone recognition, FDKM trained over 177,080 samples of the TIMIT database achieves 80.6% recognition accuracy over the full test set, without use of a prior phonetic language model.
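The abstract's core mechanism, forward decoding with data-conditioned state transitions, can be illustrated with a minimal sketch. Here the transition probabilities P(q_t = i | q_{t-1} = j, x_t) come from a simple linear-softmax score per previous state, a hypothetical stand-in for the paper's trained kernel/SVM probability model; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward_decode(X, W, prior):
    """Sketch of forward decoding with observation-conditioned transitions.

    X: (T, d) observation frames; W: (S, S, d) weights where W[:, j] scores
    each next state given previous state j; prior: (S,) initial state probs.
    Returns the MAP state index at each frame under the forward posteriors.
    """
    T, d = X.shape
    S = prior.shape[0]
    alpha = prior.copy()
    path = []
    for t in range(T):
        # Data-conditioned transition matrix: column j holds P(. | j, x_t).
        P = np.stack([softmax(W[:, j] @ X[t]) for j in range(S)], axis=1)
        alpha = P @ alpha          # forward recursion over previous states
        alpha /= alpha.sum()       # renormalize to keep a proper posterior
        path.append(int(alpha.argmax()))
    return path

rng = np.random.default_rng(0)
S, d, T = 3, 4, 6                  # toy sizes: 3 states, 4 dims, 6 frames
W = rng.normal(size=(S, S, d))
X = rng.normal(size=(T, d))
states = forward_decode(X, W, np.full(S, 1.0 / S))
print(states)                      # one MAP state index per frame
```

In FDKM the per-transition scores are produced by trained large-margin kernel classifiers rather than random linear weights, but the forward recursion over data-conditioned transition matrices is the same shape of computation.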
Cite
Text

Chakrabartty and Cauwenberghs. "Forward-Decoding Kernel-Based Phone Recognition." Neural Information Processing Systems, 2002.

Markdown

[Chakrabartty and Cauwenberghs. "Forward-Decoding Kernel-Based Phone Recognition." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/chakrabartty2002neurips-forwarddecoding/)

BibTeX
@inproceedings{chakrabartty2002neurips-forwarddecoding,
title = {{Forward-Decoding Kernel-Based Phone Recognition}},
author = {Chakrabartty, Shantanu and Cauwenberghs, Gert},
booktitle = {Neural Information Processing Systems},
year = {2002},
pages = {1189--1196},
url = {https://mlanthology.org/neurips/2002/chakrabartty2002neurips-forwarddecoding/}
}