Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance

Abstract

We propose Guided-TTS, a high-quality text-to-speech (TTS) model that uses classifier guidance to synthesize speech without any transcript of the target speaker. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for classifier guidance. Our unconditional diffusion model learns to generate speech without any context from untranscribed speech data. For TTS synthesis, we guide the generative process of the diffusion model with a phoneme classifier trained on a large-scale speech recognition dataset. We present a norm-based scaling method that reduces the pronunciation errors of classifier guidance in Guided-TTS. We show that Guided-TTS achieves performance comparable to that of the state-of-the-art TTS model Grad-TTS on LJSpeech, without using any of its transcripts. We further demonstrate that Guided-TTS performs well on diverse datasets, including a long-form untranscribed dataset.
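
To make the guidance step concrete, below is a minimal PyTorch sketch of classifier-guided score estimation with norm-based scaling, as described in the abstract. This is an illustration under stated assumptions, not the authors' released implementation: ScoreModel, PhonemeClassifier, guided_score, and the scale hyperparameter are hypothetical placeholders, and the toy linear networks stand in for the unconditional diffusion model and the frame-wise phoneme classifier.

import torch
import torch.nn as nn

# Hypothetical stand-ins for the two separately trained networks.
# The real models are a score-based diffusion model and a frame-wise
# phoneme classifier, both operating on mel-spectrogram frames.
class ScoreModel(nn.Module):          # approximates grad_x log p(x_t)
    def __init__(self, n_mels=80):
        super().__init__()
        self.net = nn.Linear(n_mels, n_mels)

    def forward(self, x, t):          # t is ignored in this toy stand-in
        return self.net(x)

class PhonemeClassifier(nn.Module):   # predicts p(phoneme | x_t) per frame
    def __init__(self, n_mels=80, n_phonemes=72):
        super().__init__()
        self.net = nn.Linear(n_mels, n_phonemes)

    def forward(self, x, t):          # t is ignored in this toy stand-in
        return self.net(x)

def guided_score(x_t, t, score_model, classifier, targets, scale=0.3):
    """Classifier-guided score with norm-based scaling (sketch).

    The classifier gradient is rescaled so its norm tracks the
    unconditional score's norm; `scale` is an assumed gradient-scale
    hyperparameter controlling guidance strength.
    """
    # Unconditional score from the diffusion model
    uncond_score = score_model(x_t, t)

    # grad_x log p(y | x_t) via autograd through the phoneme classifier
    x_in = x_t.detach().requires_grad_(True)
    log_probs = classifier(x_in, t).log_softmax(dim=-1)
    log_p_y = log_probs.gather(-1, targets.unsqueeze(-1)).sum()
    grad = torch.autograd.grad(log_p_y, x_in)[0]

    # Norm-based scaling of the guidance term
    norm_ratio = uncond_score.norm() / (grad.norm() + 1e-8)
    return uncond_score + scale * norm_ratio * grad

# Toy usage: 100 mel frames, one phoneme label per frame
x_t = torch.randn(100, 80)
targets = torch.randint(0, 72, (100,))
s = guided_score(x_t, torch.tensor(0.5), ScoreModel(), PhonemeClassifier(), targets)
print(s.shape)  # torch.Size([100, 80])

The key line is the norm ratio: tying the classifier gradient's magnitude to the unconditional score's norm keeps the guidance strength stable across diffusion steps, which is the motivation the abstract gives for norm-based scaling as a way to reduce pronunciation errors.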

Cite

Text

Kim et al. "Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance." International Conference on Machine Learning, 2022.

Markdown

[Kim et al. "Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/kim2022icml-guidedtts/)

BibTeX

@inproceedings{kim2022icml-guidedtts,
  title     = {{Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance}},
  author    = {Kim, Heeseung and Kim, Sungwon and Yoon, Sungroh},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {11119--11133},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/kim2022icml-guidedtts/}
}