Duplex Sequence-to-Sequence Learning for Reversible Machine Translation
Abstract
Sequence-to-sequence learning naturally has two directions. How can we effectively utilize the supervision signals from both directions? Existing approaches either require two separate models, or a single multitask-learned model with inferior performance. In this paper, we propose REDER (Reversible Duplex Transformer), a parameter-efficient model, and apply it to machine translation. Either end of REDER can simultaneously take input and produce output in a distinct language. Thus REDER enables *reversible machine translation* by simply flipping its input and output ends. Experiments verify that REDER achieves the first success of reversible machine translation, which helps it outperform multitask-trained baselines by up to 1.3 BLEU.
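The duplex idea can be illustrated with a small, self-contained sketch (not the authors' implementation): a stack of additive coupling layers, as in RevNet-style reversible networks, maps representations forward for one translation direction and can be run exactly in reverse for the other, with all parameters shared. The names `CouplingLayer` and `DuplexStack` below are illustrative only; the actual REDER model builds its reversible layers from Transformer components, which this toy example omits.

```python
# Minimal sketch of a reversible "duplex" stack, assuming additive coupling layers.
# Illustrative only; not the REDER architecture itself.
import torch
import torch.nn as nn


class CouplingLayer(nn.Module):
    """Additive coupling (RevNet-style): invertible by construction."""

    def __init__(self, dim: int):
        super().__init__()
        half = dim // 2
        self.f = nn.Sequential(nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))
        self.g = nn.Sequential(nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=-1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=-1)

    def reverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=-1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=-1)


class DuplexStack(nn.Module):
    """Same parameters serve both directions: forward for one, reverse for the other."""

    def __init__(self, dim: int, depth: int):
        super().__init__()
        self.layers = nn.ModuleList(CouplingLayer(dim) for _ in range(depth))

    def forward(self, h: torch.Tensor) -> torch.Tensor:   # e.g. the "source -> target" end
        for layer in self.layers:
            h = layer(h)
        return h

    def reverse(self, h: torch.Tensor) -> torch.Tensor:   # e.g. the "target -> source" end
        for layer in reversed(self.layers):
            h = layer.reverse(h)
        return h


if __name__ == "__main__":
    model = DuplexStack(dim=512, depth=6)
    h_src = torch.randn(2, 10, 512)        # (batch, length, hidden)
    h_tgt = model(h_src)                   # run one direction
    h_back = model.reverse(h_tgt)          # flip the ends: exactly recovers the input
    print(torch.allclose(h_src, h_back, atol=1e-5))
```

Because every coupling layer is exactly invertible, flipping the ends costs no extra parameters, which is the property the abstract refers to as reversible machine translation.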
Cite
Text
Zheng et al. "Duplex Sequence-to-Sequence Learning for Reversible Machine Translation." Neural Information Processing Systems, 2021.
Markdown
[Zheng et al. "Duplex Sequence-to-Sequence Learning for Reversible Machine Translation." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zheng2021neurips-duplex/)
BibTeX
@inproceedings{zheng2021neurips-duplex,
title = {{Duplex Sequence-to-Sequence Learning for Reversible Machine Translation}},
author = {Zheng, Zaixiang and Zhou, Hao and Huang, Shujian and Chen, Jiajun and Xu, Jingjing and Li, Lei},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/zheng2021neurips-duplex/}
}