ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech
Abstract
In this work, we propose a new solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (van den Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we introduce the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model.
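The closed-form KL divergence mentioned in the abstract can be illustrated for the univariate Gaussian case. Below is a minimal sketch, assuming per-timestep Gaussians for student and teacher; the function names and the regularization weight `lam` are illustrative choices, not taken from the paper.

```python
import math

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) )."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

def regularized_kl(mu_q, sigma_q, mu_p, sigma_p, lam=4.0):
    """KL plus a squared log-std penalty, which stabilizes distillation
    when both output distributions are highly peaked (sigma near zero).
    The weight `lam` here is a hypothetical value for illustration."""
    penalty = (math.log(sigma_q) - math.log(sigma_p)) ** 2
    return lam * penalty + kl_gaussian(mu_q, sigma_q, mu_p, sigma_p)
```

For identical distributions both terms vanish, so the loss is zero, and the penalty keeps gradients well-behaved as the teacher's variance shrinks.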
Cite
Text
Ping et al. "ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech." International Conference on Learning Representations, 2019.

Markdown
[Ping et al. "ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/ping2019iclr-clarinet/)

BibTeX
@inproceedings{ping2019iclr-clarinet,
  title = {{ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech}},
  author = {Ping, Wei and Peng, Kainan and Chen, Jitong},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://mlanthology.org/iclr/2019/ping2019iclr-clarinet/}
}