End-to-End ASR: From Supervised to Semi-Supervised Learning with Modern Architectures

Abstract

We study pseudo-labeling for the semi-supervised training of ResNet, Time-Depth Separable ConvNets, and Transformers for speech recognition, with either CTC or Seq2Seq loss functions. We perform experiments on the standard Librispeech dataset, and leverage additional unlabeled data from Librivox through pseudo-labeling. We show that while Transformer-based acoustic models have superior performance with the supervised dataset alone, semi-supervision improves all models across architectures and loss functions and bridges much of the performance gap between them. In doing so, we reach a new state-of-the-art for end-to-end acoustic models decoded with an external language model in the standard supervised learning setting, and a new absolute state-of-the-art with semi-supervised training. Finally, we study the effect of leveraging different amounts of unlabeled audio, propose several ways of evaluating the characteristics of unlabeled audio which improve acoustic modeling, and show that acoustic models trained with more audio rely less on external language models.
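The pseudo-labeling loop the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of self-training with confidence filtering, not the paper's pipeline: the real system trains a neural acoustic model and generates pseudo-labels by beam-search decoding Librivox audio with an external language model, whereas here a toy table-based "model" stands in for both training and inference.

```python
# Minimal pseudo-labeling (self-training) sketch. The toy "model" is a lookup
# table standing in for a trained acoustic model; all names are illustrative.

def train(labeled):
    """Stand-in for supervised training: store the majority label per input."""
    seen = {}
    for x, y in labeled:
        seen.setdefault(x, []).append(y)
    return {x: max(set(ys), key=ys.count) for x, ys in seen.items()}

def predict(model, x):
    """Return (pseudo_label, confidence); unseen inputs get zero confidence."""
    if x in model:
        return model[x], 1.0
    return None, 0.0

def pseudo_label(model, unlabeled, threshold=0.5):
    """Keep only predictions above the confidence threshold as pseudo-labels."""
    return [(x, y) for x in unlabeled
            for y, conf in [predict(model, x)] if conf >= threshold]

def self_train(labeled, unlabeled, rounds=1):
    """Train on gold data, then retrain on gold plus confident pseudo-labels."""
    model = train(labeled)
    for _ in range(rounds):
        pseudo = pseudo_label(model, unlabeled)
        model = train(labeled + pseudo)
    return model

labeled = [("a", 1), ("b", 0)]
unlabeled = ["a", "c"]
model = self_train(labeled, unlabeled)
```

In the paper's setting, `unlabeled` corresponds to Librivox audio, and the confidence filter corresponds to heuristics for selecting high-quality pseudo-transcripts before retraining.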

Cite

Text

Synnaeve et al. "End-to-End ASR: From Supervised to Semi-Supervised Learning with Modern Architectures." ICML 2020 Workshops: SAS, 2020.

Markdown

[Synnaeve et al. "End-to-End ASR: From Supervised to Semi-Supervised Learning with Modern Architectures." ICML 2020 Workshops: SAS, 2020.](https://mlanthology.org/icmlw/2020/synnaeve2020icmlw-endtoend/)

BibTeX

@inproceedings{synnaeve2020icmlw-endtoend,
  title     = {{End-to-End ASR: From Supervised to Semi-Supervised Learning with Modern Architectures}},
  author    = {Synnaeve, Gabriel and Xu, Qiantong and Kahn, Jacob and Likhomanenko, Tatiana and Grave, Edouard and Pratap, Vineel and Sriram, Anuroop and Liptchinsky, Vitaliy and Collobert, Ronan},
  booktitle = {ICML 2020 Workshops: SAS},
  year      = {2020},
  url       = {https://mlanthology.org/icmlw/2020/synnaeve2020icmlw-endtoend/}
}