Squeezeformer: An Efficient Transformer for Automatic Speech Recognition

Abstract

The recently proposed Conformer model has become the de facto backbone model for various downstream speech tasks thanks to its hybrid attention-convolution architecture, which captures both local and global features. However, through a series of systematic studies, we find that the Conformer architecture’s design choices are not optimal. After re-examining the design choices for both the macro- and micro-architecture of Conformer, we propose Squeezeformer, which consistently outperforms state-of-the-art ASR models under the same training schemes. In particular, for the macro-architecture, Squeezeformer incorporates (i) the Temporal U-Net structure, which reduces the cost of the multi-head attention modules on long sequences, and (ii) a simpler block structure in which a multi-head attention or convolution module is followed by a feed-forward module, instead of the Macaron structure proposed in Conformer. Furthermore, for the micro-architecture, Squeezeformer (i) simplifies the activations in the convolutional block, (ii) removes redundant Layer Normalization operations, and (iii) incorporates an efficient depthwise down-sampling layer to sub-sample the input signal. Squeezeformer achieves state-of-the-art results of 7.5%, 6.5%, and 6.0% word-error-rate (WER) on LibriSpeech test-other without external language models, which are 3.1%, 1.4%, and 0.6% better than Conformer-CTC with the same number of FLOPs. Our code is open-sourced and available online.
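The sketch below is a minimal illustration (not the authors' implementation) of two ideas from the abstract: the Temporal U-Net, which sub-samples the sequence partway through the encoder and restores the resolution with a skip connection, and the simplified block ordering in which an attention module is followed by a single feed-forward module. Module names, dimensions, and the block count are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FeedForward(nn.Module):
    """Feed-forward module with a residual connection."""
    def __init__(self, d_model: int, expansion: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, expansion * d_model),
            nn.SiLU(),  # Swish activation
            nn.Linear(expansion * d_model, d_model),
        )

    def forward(self, x):
        return x + self.net(x)


class AttentionFFNBlock(nn.Module):
    """Simplified block: multi-head attention followed by one feed-forward module."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = FeedForward(d_model)

    def forward(self, x):
        h = self.norm(x)
        h, _ = self.attn(h, h, h, need_weights=False)
        return self.ffn(x + h)


class TemporalUNetEncoder(nn.Module):
    """Halve the temporal resolution in the middle of the stack, then
    recover it with upsampling plus a U-Net style skip connection."""
    def __init__(self, d_model: int = 144, n_blocks: int = 4):
        super().__init__()
        self.early = nn.ModuleList(AttentionFFNBlock(d_model) for _ in range(n_blocks))
        # Depthwise temporal down-sampling (stride 2): cheaper than a full convolution.
        self.down = nn.Conv1d(d_model, d_model, kernel_size=3, stride=2,
                              padding=1, groups=d_model)
        self.late = nn.ModuleList(AttentionFFNBlock(d_model) for _ in range(n_blocks))
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x):  # x: (batch, time, d_model)
        for blk in self.early:
            x = blk(x)
        skip = x
        y = self.down(x.transpose(1, 2)).transpose(1, 2)   # (batch, time/2, d_model)
        for blk in self.late:                               # attention now runs on half the frames
            y = blk(y)
        y = self.up(y.transpose(1, 2)).transpose(1, 2)      # back to (batch, ~time, d_model)
        y = y[:, : skip.size(1)]                            # trim any length mismatch from padding
        return y + skip                                     # skip connection restores fine detail


if __name__ == "__main__":
    enc = TemporalUNetEncoder()
    feats = torch.randn(2, 100, 144)   # (batch, frames, features)
    print(enc(feats).shape)            # torch.Size([2, 100, 144])
```

Because the second half of the stack operates at half the frame rate, the quadratic cost of its attention modules drops by roughly a factor of four, which is the efficiency argument the abstract makes for the Temporal U-Net.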

Cite

Text

Kim et al. "Squeezeformer: An Efficient Transformer for Automatic Speech Recognition." Neural Information Processing Systems, 2022.

Markdown

[Kim et al. "Squeezeformer: An Efficient Transformer for Automatic Speech Recognition." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/kim2022neurips-squeezeformer/)

BibTeX

@inproceedings{kim2022neurips-squeezeformer,
  title     = {{Squeezeformer: An Efficient Transformer for Automatic Speech Recognition}},
  author    = {Kim, Sehoon and Gholami, Amir and Shaw, Albert and Lee, Nicholas and Mangalam, Karttikeya and Malik, Jitendra and Mahoney, Michael W. and Keutzer, Kurt},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/kim2022neurips-squeezeformer/}
}