Sample Efficient Adaptive Text-to-Speech

Abstract

We present a meta-learning approach for adaptive text-to-speech (TTS) with little data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system. Instead, the aim is to produce a network that requires little data at deployment time to rapidly adapt to new speakers. We introduce and benchmark three strategies: (i) learning the speaker embedding while keeping the WaveNet core fixed, (ii) fine-tuning the entire architecture with stochastic gradient descent, and (iii) predicting the speaker embedding with a trained neural network encoder. The experiments show that these approaches are successful at adapting the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers.
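To make the three adaptation strategies concrete, here is a minimal PyTorch sketch of strategies (i) and (ii), with (iii) noted in a comment. This is not the authors' code: the class `MultiSpeakerTTS`, the helper names, and the tiny linear layer standing in for the conditional WaveNet core are all illustrative assumptions.

```python
# Illustrative sketch of few-shot speaker adaptation. All names are
# hypothetical; a toy linear layer stands in for the WaveNet core.
import torch
import torch.nn as nn


class MultiSpeakerTTS(nn.Module):
    """Shared core plus a per-speaker embedding table (toy stand-in)."""

    def __init__(self, n_speakers: int, emb_dim: int = 128):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, emb_dim)
        self.core = nn.Linear(emb_dim, 1)  # stand-in for the conditional WaveNet

    def forward(self, speaker_emb: torch.Tensor) -> torch.Tensor:
        return self.core(speaker_emb)


def adapt_embedding_only(model, loss_fn, targets, emb_dim=128, steps=50):
    """Strategy (i): learn a new speaker embedding; the shared core stays frozen."""
    for p in model.parameters():
        p.requires_grad_(False)  # freeze the multi-speaker core
    new_emb = nn.Parameter(torch.zeros(emb_dim))
    opt = torch.optim.SGD([new_emb], lr=1e-2)
    for _ in range(steps):
        for target in targets:
            opt.zero_grad()
            loss_fn(model(new_emb), target).backward()
            opt.step()
    return new_emb


def adapt_whole_model(model, loss_fn, targets, emb_dim=128, steps=50):
    """Strategy (ii): fine-tune the new embedding and all core weights with SGD."""
    for p in model.parameters():
        p.requires_grad_(True)
    new_emb = nn.Parameter(torch.zeros(emb_dim))
    opt = torch.optim.SGD([new_emb, *model.parameters()], lr=1e-4)
    for _ in range(steps):
        for target in targets:
            opt.zero_grad()
            loss_fn(model(new_emb), target).backward()
            opt.step()
    return new_emb


# Strategy (iii) replaces the inner optimization loop with a single forward
# pass through a separately trained encoder: new_emb = encoder(adapt_audio).

# Toy usage: a few scalar targets stand in for minutes of adaptation audio.
model = MultiSpeakerTTS(n_speakers=100)
emb = adapt_embedding_only(model, nn.MSELoss(),
                           [torch.randn(1) for _ in range(4)])
```

The trade-off the abstract benchmarks is visible here: strategy (i) optimizes only `emb_dim` parameters and so needs very little data, strategy (ii) updates every weight and is more expressive but easier to overfit, and strategy (iii) requires no gradient steps at deployment time at all.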

Cite

Text

Chen et al. "Sample Efficient Adaptive Text-to-Speech." International Conference on Learning Representations, 2019.

Markdown

[Chen et al. "Sample Efficient Adaptive Text-to-Speech." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/chen2019iclr-sample/)

BibTeX

@inproceedings{chen2019iclr-sample,
  title     = {{Sample Efficient Adaptive Text-to-Speech}},
  author    = {Chen, Yutian and Assael, Yannis and Shillingford, Brendan and Budden, David and Reed, Scott and Zen, Heiga and Wang, Quan and Cobo, Luis C. and Trask, Andrew and Laurie, Ben and Gulcehre, Caglar and van den Oord, Aäron and Vinyals, Oriol and de Freitas, Nando},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/chen2019iclr-sample/}
}