ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation

Abstract

Joint speech-language training is challenging due to its heavy demands on training data and GPU resources, as well as the modality gap between speech and language. We present ComSL, a speech-language model built atop a composite architecture of publicly available pre-trained speech-only and language-only models and optimized in a data-efficient manner for spoken-language tasks. In particular, we propose incorporating cross-modality learning into transfer learning and conducting both simultaneously for downstream tasks in a multi-task learning manner. Our approach has proven effective for end-to-end speech-to-text translation, achieving a new state-of-the-art average BLEU score of 31.5 on multilingual speech-to-English-text translation across 21 languages, as measured on the public CoVoST2 evaluation set.

Cite

Text

Le et al. "ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation." Neural Information Processing Systems, 2023.

Markdown

[Le et al. "ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/le2023neurips-comsl/)

BibTeX

@inproceedings{le2023neurips-comsl,
  title     = {{ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation}},
  author    = {Le, Chenyang and Qian, Yao and Zhou, Long and Liu, Shujie and Qian, Yanmin and Zeng, Michael and Huang, Xuedong},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/le2023neurips-comsl/}
}