TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation

Abstract

There is rising research interest in directly translating speech from one language to another, known as end-to-end speech-to-speech translation. However, most end-to-end models struggle to outperform cascade models, i.e., pipelines that concatenate speech recognition, machine translation, and text-to-speech models. The primary challenges stem from the inherent complexities of the direct translation task and the scarcity of data. In this study, we introduce a novel model framework, TransVIP, that leverages diverse datasets in a cascade fashion yet facilitates end-to-end inference through joint probability. Furthermore, we propose two separate encoders to preserve the speaker's voice characteristics and isochrony from the source speech during the translation process, making it highly suitable for scenarios such as video dubbing. Our experiments on the French-English language pair demonstrate that our model outperforms the current state-of-the-art speech-to-speech translation model.
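The abstract's contrast between cascaded pipelines and end-to-end inference via joint probability can be sketched as follows. This is a minimal illustration, not the paper's method: the stage names (ASR, MT, TTS) and all log-probabilities are hypothetical. A greedy cascade commits to the best hypothesis at each stage, whereas joint inference scores complete paths by the product (sum in log space) of the stage-wise conditionals, which can select a different, globally better path.

```python
import math

# Hypothetical per-stage log-probabilities for three candidate paths
# through a cascaded pipeline (recognition -> translation -> synthesis).
# All names and numbers are illustrative, not taken from the paper.
candidates = [
    {"path": "hyp-A", "asr": math.log(0.60), "mt": math.log(0.20), "tts": math.log(0.90)},
    {"path": "hyp-B", "asr": math.log(0.30), "mt": math.log(0.70), "tts": math.log(0.80)},
    {"path": "hyp-C", "asr": math.log(0.10), "mt": math.log(0.50), "tts": math.log(0.50)},
]

def joint_log_prob(c):
    # Joint probability of a full path = product of stage conditionals,
    # i.e. the sum of their log-probabilities.
    return c["asr"] + c["mt"] + c["tts"]

# Greedy cascade: commit to the most likely recognition hypothesis first.
greedy = max(candidates, key=lambda c: c["asr"])

# End-to-end joint inference: rank complete paths by joint probability.
joint = max(candidates, key=joint_log_prob)

print(greedy["path"], joint["path"])  # greedy and joint choices can differ
```

Here the greedy cascade picks hyp-A (highest recognition score alone), while joint scoring prefers hyp-B (0.3 × 0.7 × 0.8 = 0.168 versus 0.6 × 0.2 × 0.9 = 0.108), illustrating why end-to-end inference over the joint distribution can beat stage-by-stage decisions.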

Cite

Text

Le et al. "TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation." Neural Information Processing Systems, 2024. doi:10.52202/079017-2847

Markdown

[Le et al. "TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/le2024neurips-transvip/) doi:10.52202/079017-2847

BibTeX

@inproceedings{le2024neurips-transvip,
  title     = {{TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation}},
  author    = {Le, Chenyang and Qian, Yao and Wang, Dongmei and Zhou, Long and Liu, Shujie and Wang, Xiaofei and Yousefi, Midia and Qian, Yanmin and Li, Jinyu and Zhao, Sheng and Zeng, Michael},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2847},
  url       = {https://mlanthology.org/neurips/2024/le2024neurips-transvip/}
}