Modeling Consistency in a Speaker Independent Continuous Speech Recognition System
Abstract
We would like to incorporate speaker-dependent consistencies, such as gender, in an otherwise speaker-independent speech recognition system. In this paper we discuss a Gender Dependent Neural Network (GDNN) which can be tuned for each gender, while sharing most of the speaker-independent parameters. We use a classification network to help generate gender-dependent phonetic probabilities for a statistical (HMM) recognition system. The gender classification net predicts the gender with high accuracy, 98.3% on a Resource Management test set. However, the integration of the GDNN into our hybrid HMM-neural network recognizer provided an improvement in the recognition score that is not statistically significant on a Resource Management test set.
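One natural way to use a gender classification net alongside gender-tuned phonetic networks is to mix the two gender-dependent phone posteriors, weighted by the gender net's posterior. The sketch below is an illustration of that idea, not the paper's actual integration scheme; the function name, shapes, and soft-mixing choice are all assumptions for demonstration.

```python
import numpy as np

def gender_mixed_phone_probs(p_male, phone_probs_male, phone_probs_female):
    """Hypothetical sketch: blend male- and female-tuned phone posteriors
    using the gender net's posterior P(male | acoustics) as the weight.
    If both inputs are valid distributions, the mixture is one too."""
    p_male = float(p_male)
    male = np.asarray(phone_probs_male, dtype=float)
    female = np.asarray(phone_probs_female, dtype=float)
    return p_male * male + (1.0 - p_male) * female

# Toy example with three phone classes:
probs = gender_mixed_phone_probs(
    0.9,              # gender net is fairly confident the speaker is male
    [0.7, 0.2, 0.1],  # male-tuned net's phone posteriors
    [0.5, 0.3, 0.2],  # female-tuned net's phone posteriors
)
print(probs)  # still sums to 1, dominated by the male-tuned estimates
```

With a gender classifier this accurate (98.3%), the soft mixture behaves almost like hard selection of the matching gender-tuned network, which is one reason a large recognition gain is not guaranteed.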
Cite
Konig et al. "Modeling Consistency in a Speaker Independent Continuous Speech Recognition System." Neural Information Processing Systems, 1992.
BibTeX
@inproceedings{konig1992neurips-modeling,
title = {{Modeling Consistency in a Speaker Independent Continuous Speech Recognition System}},
author = {Konig, Yochai and Morgan, Nelson and Wooters, Chuck and Abrash, Victor and Cohen, Michael and Franco, Horacio},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {682-687},
url = {https://mlanthology.org/neurips/1992/konig1992neurips-modeling/}
}