MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition

Abstract

Multimedia communication facilitates global interaction among people. However, although researchers have explored cross-lingual translation techniques such as machine translation and audio speech translation to overcome language barriers, cross-lingual studies on visual speech remain scarce, mainly because datasets pairing visual speech with translated text have been unavailable. In this paper, we present AVMuST-TED, the first dataset for Audio-Visual Multilingual Speech Translation, derived from TED talks. Visual speech, however, is less distinguishable than audio speech, which makes it difficult to learn a mapping from source speech phonemes to target-language text. To address this issue, we propose MixSpeech, a cross-modality self-learning framework that uses audio speech to regularize the training of visual speech tasks. To further narrow the cross-modality gap and its impact on knowledge transfer, we adopt mixed speech, created by interpolating audio and visual streams, together with a curriculum learning strategy that adjusts the mixing ratio as needed. MixSpeech improves speech translation in noisy environments, raising BLEU scores for four languages on AVMuST-TED by +1.4 to +4.2, and achieves state-of-the-art lip-reading performance on CMLR (11.1%), LRS2 (25.5%), and LRS3 (28.0%).
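
The core idea described above is to interpolate the audio and visual streams and to schedule that interpolation with a curriculum. Below is a minimal, illustrative sketch of the interpolation step in PyTorch; it is not the authors' implementation, and the function name, feature shapes, and linear curriculum schedule are assumptions for illustration only.

import torch

def mix_streams(audio_feats, visual_feats, step, total_steps):
    # Hypothetical helper (not from the MixSpeech codebase): interpolate
    # frame-aligned audio and visual features. Both tensors are assumed to
    # have shape (batch, time, dim) and to be temporally aligned.
    # Assumed linear curriculum: the visual weight grows over training, so
    # early training leans on the more distinguishable audio stream.
    lam_visual = min(1.0, step / max(1, total_steps))
    return (1.0 - lam_visual) * audio_feats + lam_visual * visual_feats

# Example with random tensors standing in for encoder outputs.
audio = torch.randn(8, 100, 512)    # audio-stream features
visual = torch.randn(8, 100, 512)   # lip-region video features
mixed = mix_streams(audio, visual, step=10, total_steps=50)
print(mixed.shape)  # torch.Size([8, 100, 512])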

Cite

Text

Cheng et al. "MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01442

Markdown

[Cheng et al. "MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/cheng2023iccv-mixspeech/) doi:10.1109/ICCV51070.2023.01442

BibTeX

@inproceedings{cheng2023iccv-mixspeech,
  title     = {{MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition}},
  author    = {Cheng, Xize and Jin, Tao and Huang, Rongjie and Li, Linjun and Lin, Wang and Wang, Zehan and Wang, Ye and Liu, Huadai and Yin, Aoxiong and Zhao, Zhou},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {15735--15745},
  doi       = {10.1109/ICCV51070.2023.01442},
  url       = {https://mlanthology.org/iccv/2023/cheng2023iccv-mixspeech/}
}