Bridging the Gap Between Pre-Training and Fine-Tuning for End-to-End Speech Translation
Abstract
End-to-end speech translation, a hot topic in recent years, aims to translate a segment of audio into a specific language with an end-to-end model. Conventional approaches employ multi-task learning and pre-training methods for this task, but they suffer from the huge gap between pre-training and fine-tuning. To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN) which bridges the gap by reusing all subnets in fine-tuning, keeping the roles of subnets consistent, and pre-training the attention module. Furthermore, we propose two simple but effective methods to guarantee the speech encoder outputs and the MT encoder inputs are consistent in terms of semantic representation and sequence length. Experimental results show that our model leads to significant improvements in En-De and En-Fr translation irrespective of the backbones.
Cite
Text
Wang et al. "Bridging the Gap Between Pre-Training and Fine-Tuning for End-to-End Speech Translation." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6452
Markdown
[Wang et al. "Bridging the Gap Between Pre-Training and Fine-Tuning for End-to-End Speech Translation." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/wang2020aaai-bridging/) doi:10.1609/AAAI.V34I05.6452
BibTeX
@inproceedings{wang2020aaai-bridging,
title = {{Bridging the Gap Between Pre-Training and Fine-Tuning for End-to-End Speech Translation}},
author = {Wang, Chengyi and Wu, Yu and Liu, Shujie and Yang, Zhenglu and Zhou, Ming},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {9161--9168},
doi = {10.1609/AAAI.V34I05.6452},
url = {https://mlanthology.org/aaai/2020/wang2020aaai-bridging/}
}