Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with Unsupervised Text Pretraining

Abstract

While neural text-to-speech (TTS) has achieved human-like natural synthetic speech, multilingual TTS systems are limited to resource-rich languages due to the need for paired text and studio-quality audio data. This paper proposes a method for zero-shot multilingual TTS using text-only data for the target language. The use of text-only data allows the development of TTS systems for low-resource languages for which only textual resources are available, making TTS accessible to thousands of languages. Inspired by the strong cross-lingual transferability of multilingual language models, our framework first performs masked language model pretraining with multilingual text-only data. Then we train this model on paired data in a supervised manner, while freezing a language-aware embedding layer. This allows inference even for languages not included in the paired data but present in the text-only data. Evaluation results demonstrate highly intelligible zero-shot TTS with a character error rate of less than 12% for an unseen language.
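The two-stage scheme in the abstract can be sketched as toy code. This is a minimal illustration, not the paper's implementation: the language list, dimensions, and the stand-in "MLM" embeddings and LMS-style decoder update are all hypothetical, chosen only to show the key idea that the language-aware embeddings are learned from text for every language, frozen during supervised training on the paired subset, and reused at inference for an unseen language.

```python
import numpy as np

# Toy sketch of the training scheme described in the abstract.
# All names and sizes (LANGS, PAIRED_LANGS, DIM) are illustrative.
rng = np.random.default_rng(0)
LANGS = ["en", "de", "xx"]      # "xx": text-only language with no paired audio
PAIRED_LANGS = ["en", "de"]     # languages with paired text/speech data
DIM = 8

# Stage 1: unsupervised text pretraining yields a language-aware embedding
# for every language, including the text-only one. Random vectors stand in
# for embeddings that masked-language-model training would actually learn.
lang_embed = {lang: rng.normal(size=DIM) for lang in LANGS}

# Stage 2: supervised training on paired languages only. The language-aware
# embedding layer is frozen; only the decoder weights W are updated.
W = np.zeros((DIM, DIM))
for step in range(200):
    lang = PAIRED_LANGS[step % len(PAIRED_LANGS)]
    e = lang_embed[lang]                 # frozen: read-only in this stage
    target = np.tanh(e)                  # stand-in for the acoustic target
    pred = W @ e
    W += 0.1 * np.outer(target - pred, e) / (e @ e)  # simple LMS-style update

# Stage 3: zero-shot inference for the unseen language "xx" reuses its
# pretrained embedding with the decoder trained on the other languages.
pred_xx = W @ lang_embed["xx"]
print(pred_xx.shape)
```

The point of freezing the embedding layer in stage 2 is that the decoder learns to map *any* language embedding from the shared pretrained space to speech, so an embedding seen only during text pretraining remains a valid input at inference time.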

Cite

Text

Saeki et al. "Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with Unsupervised Text Pretraining." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/575

Markdown

[Saeki et al. "Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with Unsupervised Text Pretraining." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/saeki2023ijcai-learning/) doi:10.24963/IJCAI.2023/575

BibTeX

@inproceedings{saeki2023ijcai-learning,
  title     = {{Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with Unsupervised Text Pretraining}},
  author    = {Saeki, Takaaki and Maiti, Soumi and Li, Xinjian and Watanabe, Shinji and Takamichi, Shinnosuke and Saruwatari, Hiroshi},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {5179--5187},
  doi       = {10.24963/IJCAI.2023/575},
  url       = {https://mlanthology.org/ijcai/2023/saeki2023ijcai-learning/}
}