Understanding the Role of Self Attention for Efficient Speech Recognition

Abstract

Self-attention (SA) is a critical component of Transformer neural networks that have succeeded in automatic speech recognition (ASR). In this paper, we analyze the role of SA in Transformer-based ASR models, both to understand the mechanism behind the improved recognition accuracy and to lower the computational complexity. We reveal that SA performs two distinct roles: phonetic and linguistic localization. In particular, we show experimentally that phonetic localization in the lower layers extracts phonologically meaningful features from speech and reduces the phonetic variance in the utterance, enabling proper linguistic localization in the upper layers. From this understanding, we discover that attention maps can be reused across layers as long as their localization capability is preserved. To evaluate this idea, we implement layer-wise attention map reuse on real GPU platforms and achieve up to a 1.96x speedup in inference and 33% savings in training time, with noticeably improved ASR performance on the challenging LibriSpeech dev/test-other benchmark.

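The layer-wise attention map reuse described in the abstract can be illustrated with a short sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the module names (`ReusableSelfAttention`, `AttentionReuseEncoder`), the `reuse_factor` grouping, and the layer sizes are all assumptions. The idea it shows is that the first layer in each group computes softmax(QK^T / sqrt(d)) once, and the following layers apply that same attention map to their own value projections, skipping the query/key projections and the softmax.

```python
# Hypothetical sketch of layer-wise attention map reuse (not the authors' code).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReusableSelfAttention(nn.Module):
    """Multi-head self-attention that can reuse a precomputed attention map."""

    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x, reused_attn=None):
        # x: (batch, time, d_model)
        B, T, _ = x.shape
        v = self.v_proj(x).view(B, T, self.num_heads, self.d_head).transpose(1, 2)
        if reused_attn is None:
            # Compute a fresh attention map: softmax(QK^T / sqrt(d_head)).
            q = self.q_proj(x).view(B, T, self.num_heads, self.d_head).transpose(1, 2)
            k = self.k_proj(x).view(B, T, self.num_heads, self.d_head).transpose(1, 2)
            attn = F.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_head), dim=-1)
        else:
            # Reuse the map from an earlier layer; Q/K projections and softmax are skipped.
            attn = reused_attn
        out = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(out), attn


class ReuseEncoderLayer(nn.Module):
    """Pre-norm Transformer encoder layer built on the reusable attention above."""

    def __init__(self, d_model, num_heads, d_ff):
        super().__init__()
        self.attn = ReusableSelfAttention(d_model, num_heads)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, reused_attn=None):
        attn_out, attn = self.attn(self.norm1(x), reused_attn)
        x = x + attn_out
        x = x + self.ffn(self.norm2(x))
        return x, attn


class AttentionReuseEncoder(nn.Module):
    """Stack in which every `reuse_factor` consecutive layers share one attention map."""

    def __init__(self, num_layers=12, reuse_factor=3, d_model=256, num_heads=4, d_ff=1024):
        super().__init__()
        self.reuse_factor = reuse_factor
        self.layers = nn.ModuleList(
            [ReuseEncoderLayer(d_model, num_heads, d_ff) for _ in range(num_layers)]
        )

    def forward(self, x):
        attn = None
        for i, layer in enumerate(self.layers):
            if i % self.reuse_factor == 0:
                x, attn = layer(x)        # first layer of a group computes the map
            else:
                x, _ = layer(x, attn)     # subsequent layers reuse it
        return x


if __name__ == "__main__":
    model = AttentionReuseEncoder()
    speech_features = torch.randn(2, 100, 256)   # (batch, frames, d_model), placeholder input
    print(model(speech_features).shape)          # torch.Size([2, 100, 256])
```

With a reuse factor of 3, only every third layer pays for the query/key projections and the softmax, which is where the inference and training savings in the paper come from; the actual grouping used by the authors may differ from this sketch.
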
Cite

Text

Shim et al. "Understanding the Role of Self Attention for Efficient Speech Recognition." International Conference on Learning Representations, 2022.

Markdown

[Shim et al. "Understanding the Role of Self Attention for Efficient Speech Recognition." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/shim2022iclr-understanding/)

BibTeX

@inproceedings{shim2022iclr-understanding,
  title     = {{Understanding the Role of Self Attention for Efficient Speech Recognition}},
  author    = {Shim, Kyuhong and Choi, Jungwook and Sung, Wonyong},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/shim2022iclr-understanding/}
}