YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone
Abstract
YourTTS brings the power of a multilingual approach to the task of zero-shot multi-speaker TTS. Our method builds upon the VITS model and adds several novel modifications for zero-shot multi-speaker and multilingual training. We achieved state-of-the-art (SOTA) results in zero-shot multi-speaker TTS and results comparable to SOTA in zero-shot voice conversion on the VCTK dataset. Additionally, our approach achieves promising results in a target language with a single-speaker dataset, opening possibilities for zero-shot multi-speaker TTS and zero-shot voice conversion systems in low-resource languages. Finally, it is possible to fine-tune the YourTTS model with less than 1 minute of speech and achieve state-of-the-art results in voice similarity with reasonable quality. This is important for enabling synthesis for speakers whose voice or recording characteristics differ greatly from those seen during training.
Cite
Text
Casanova et al. "YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone." International Conference on Machine Learning, 2022.
Markdown
[Casanova et al. "YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/casanova2022icml-yourtts/)
BibTeX
@inproceedings{casanova2022icml-yourtts,
title = {{YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone}},
author = {Casanova, Edresson and Weber, Julian and Shulby, Christopher D and Junior, Arnaldo Candido and Gölge, Eren and Ponti, Moacir A},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {2709--2720},
volume = {162},
url = {https://mlanthology.org/icml/2022/casanova2022icml-yourtts/}
}