GenerSpeech: Towards Style Transfer for Generalizable Out-of-Domain Text-to-Speech

Abstract

Style transfer for out-of-domain (OOD) speech synthesis aims to generate speech samples with an unseen style (e.g., speaker identity, emotion, and prosody) derived from an acoustic reference, while facing the following challenges: 1) the highly dynamic style features in expressive voice are difficult to model and transfer; and 2) TTS models should be robust enough to handle diverse OOD conditions that differ from the source data. This paper proposes GenerSpeech, a text-to-speech model towards high-fidelity zero-shot style transfer of OOD custom voice. GenerSpeech decomposes the speech variation into style-agnostic and style-specific parts by introducing two components: 1) a multi-level style adaptor to efficiently model a large range of style conditions, including global speaker and emotion characteristics, and local (utterance-, phoneme-, and word-level) fine-grained prosodic representations; and 2) a generalizable content adaptor with Mix-Style Layer Normalization to eliminate style information in the linguistic content representation and thus improve model generalization. Our evaluations on zero-shot style transfer demonstrate that GenerSpeech surpasses the state-of-the-art models in terms of audio quality and style similarity. The extension studies to adaptive style transfer further show that GenerSpeech performs robustly in the few-shot data setting. Audio samples are available at \url{https://GenerSpeech.github.io/}.
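The abstract describes Mix-Style Layer Normalization as a way to remove style information from the content representation. A minimal NumPy sketch of one plausible reading is below: a layer normalization whose scale and shift are predicted from a global style embedding, where during training the style embedding is randomly mixed (mixup-style, with a Beta-sampled coefficient) with that of another utterance in the batch so that the content encoder cannot rely on consistent style cues. All function and parameter names here (`mix_style_layer_norm`, `style_proj_scale`, `style_proj_shift`, `mix_prob`, `alpha`) are hypothetical and not taken from the paper.

```python
import numpy as np

def mix_style_layer_norm(x, style, style_proj_scale, style_proj_shift,
                         mix_prob=0.5, alpha=0.2, training=True, rng=None):
    """Hedged sketch of a Mix-Style Layer Normalization (assumed design).

    x:     (batch, time, channels) hidden states from the content encoder
    style: (batch, style_dim) global style embeddings
    style_proj_scale / style_proj_shift: (style_dim, channels) projections
        that map the style embedding to per-channel affine parameters
    """
    rng = rng or np.random.default_rng(0)
    if training and rng.random() < mix_prob:
        # Mixup over style embeddings: convex combination with a
        # randomly permuted batch, coefficient drawn from Beta(alpha, alpha)
        perm = rng.permutation(len(style))
        lam = rng.beta(alpha, alpha, size=(len(style), 1))
        style = lam * style + (1.0 - lam) * style[perm]
    # Standard layer normalization over the channel dimension
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_norm = (x - mean) / np.sqrt(var + 1e-5)
    # Style-conditioned affine transform (scale and shift per channel)
    gamma = style @ style_proj_scale   # (batch, channels)
    beta = style @ style_proj_shift    # (batch, channels)
    return gamma[:, None, :] * x_norm + beta[:, None, :]
```

Because the affine parameters come from a perturbed style vector, the downstream content representation is trained to be invariant to the exact style condition, which is consistent with the abstract's goal of a style-agnostic content pathway.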

Cite

Text

Huang et al. "GenerSpeech: Towards Style Transfer for Generalizable Out-of-Domain Text-to-Speech." Neural Information Processing Systems, 2022.

Markdown

[Huang et al. "GenerSpeech: Towards Style Transfer for Generalizable Out-of-Domain Text-to-Speech." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/huang2022neurips-generspeech/)

BibTeX

@inproceedings{huang2022neurips-generspeech,
  title     = {{GenerSpeech: Towards Style Transfer for Generalizable Out-of-Domain Text-to-Speech}},
  author    = {Huang, Rongjie and Ren, Yi and Liu, Jinglin and Cui, Chenye and Zhao, Zhou},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/huang2022neurips-generspeech/}
}