Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations

Abstract

We explore a novel zero-shot Audio-Visual Speech Recognition (AVSR) framework, dubbed Zero-AVSR, which enables speech recognition in target languages without requiring any audio-visual speech data in those languages. Specifically, we introduce the Audio-Visual Speech Romanizer (AV-Romanizer), which learns language-agnostic speech representations by predicting Roman text. Then, by leveraging the strong multilingual modeling capabilities of Large Language Models (LLMs), we propose converting the predicted Roman text into language-specific graphemes, forming the proposed Cascaded Zero-AVSR. Taking it a step further, we explore a unified Zero-AVSR approach by directly integrating the audio-visual speech representations encoded by the AV-Romanizer into the LLM. This is achieved through fine-tuning the adapter and the LLM using our proposed multi-task learning scheme. To capture the wide spectrum of phonetic and linguistic diversity, we also introduce a Multilingual Audio-Visual Romanized Corpus (MARC) consisting of 2,916 hours of audio-visual speech data across 82 languages, along with transcriptions in both language-specific graphemes and Roman text. Extensive analysis and experiments confirm that the proposed Zero-AVSR framework has the potential to expand language support beyond the languages seen during the training of the AV-Romanizer. Code is available at https://bit.ly/zero-avsr.

Cite

Text

Yeo et al. "Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations." International Conference on Computer Vision, 2025.

Markdown

[Yeo et al. "Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/yeo2025iccv-zeroavsr/)

BibTeX

@inproceedings{yeo2025iccv-zeroavsr,
  title     = {{Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations}},
  author    = {Yeo, Jeong Hun and Kim, Minsu and Kim, Chae Won and Petridis, Stavros and Ro, Yong Man},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {6693--6703},
  url       = {https://mlanthology.org/iccv/2025/yeo2025iccv-zeroavsr/}
}