Dynamic Speech Discrimination Using an Articulatory Model

Abstract

The author proposes a dynamic speech-processing method that parallels human speech perception. It uses an articulatory model representing tongue and lip movements to extract speech features from speech waveforms. A linear enhancement of the articulatory movements is applied to estimate their target articulatory positions; when these target positions are used for vowel discrimination, the correct-discrimination rate improves substantially. For example, symmetric vowel sequences V1V2V1 spoken by ten male speakers are analyzed, and the discrimination rate for V2 improves from 85% to 100%. Hearing tests further show that this dynamic processing corresponds well to the human auditory system.
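The linear-enhancement idea described above can be sketched as extrapolating an observed articulatory trajectory along its direction of movement to estimate the (possibly undershot) target position. The following is a minimal illustrative sketch, not the paper's actual algorithm; the `gain` factor and the toy trajectory are assumptions introduced here for demonstration.

```python
import numpy as np

def estimate_target(positions, gain=2.0):
    """Estimate an articulatory target by linearly enhancing
    (extrapolating) the observed movement.

    `gain` is a hypothetical enhancement factor, not a value
    taken from the paper.
    """
    positions = np.asarray(positions, dtype=float)
    # Frame-to-frame movement of the articulator coordinate
    velocity = np.gradient(positions, axis=0)
    # Extrapolate the final observed position along the final velocity
    return positions[-1] + gain * velocity[-1]

# Toy trajectory: one tongue-position coordinate moving toward
# a target that the articulation undershoots
trajectory = [0.0, 0.3, 0.5, 0.6]
print(estimate_target(trajectory))
```

With the toy values above, the estimate lies beyond the last observed position, which is the intended effect: discrimination is then performed on the enhanced target rather than on the undershot measurement.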

Cite

Text

Ishizaki. "Dynamic Speech Discrimination Using an Articulatory Model." International Joint Conference on Artificial Intelligence, 1979.

Markdown

[Ishizaki. "Dynamic Speech Discrimination Using an Articulatory Model." International Joint Conference on Artificial Intelligence, 1979.](https://mlanthology.org/ijcai/1979/ishizaki1979ijcai-dynamic/)

BibTeX

@inproceedings{ishizaki1979ijcai-dynamic,
  title     = {{Dynamic Speech Discrimination Using an Articulatory Model}},
  author    = {Ishizaki, Shun},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1979},
  pages     = {422--424},
  url       = {https://mlanthology.org/ijcai/1979/ishizaki1979ijcai-dynamic/}
}