Agreement on Target-Bidirectional Recurrent Neural Networks for Sequence-to-Sequence Learning
Abstract
Recurrent neural networks are extremely appealing for sequence-to-sequence learning tasks. Despite their great success, they typically suffer from a shortcoming: they are prone to generating unbalanced targets with good prefixes but bad suffixes, and thus performance suffers when dealing with long sequences. We propose a simple yet effective approach to overcome this shortcoming. Our approach relies on the agreement between a pair of target-directional RNNs, which generates more balanced targets. In addition, we develop two efficient approximate search methods for agreement that are empirically shown to be almost optimal in terms of either sequence-level or non-sequence-level metrics. Extensive experiments were performed on three standard sequence-to-sequence transduction tasks: machine transliteration, grapheme-to-phoneme transformation, and machine translation. The results show that the proposed approach achieves consistent and substantial improvements compared to many state-of-the-art systems.
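The agreement idea described in the abstract can be illustrated with a minimal toy sketch: rescore candidate targets by combining a left-to-right model's score (which tends to favor good prefixes) with a right-to-left model's score (which tends to favor good suffixes), and pick the candidate both directions rate highly. The candidate strings and log-scores below are invented for illustration and are not the paper's RNNs or data.

```python
# Toy illustration of agreement between two target-directional models.
# The scores are hypothetical log-probabilities, standing in for a
# left-to-right RNN and a right-to-left RNN over the same candidates.
l2r_score = {"abc": -1.0, "abd": -1.5, "xbd": -3.0}  # favors good prefixes
r2l_score = {"abc": -1.2, "abd": -0.8, "xbd": -0.9}  # favors good suffixes

def joint_score(candidate: str) -> float:
    # Agreement objective: sum of the two directional log-scores, so a
    # candidate must look good from both ends to win (balanced target).
    return l2r_score[candidate] + r2l_score[candidate]

best = max(l2r_score, key=joint_score)
print(best)  # "abc": best combined score (-2.2) across both directions
```

In the paper itself the set of candidates is not enumerable, which is why the authors develop approximate search methods for the agreement objective; this sketch only shows the rescoring principle on a fixed candidate list.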
Cite
Text
Liu et al. "Agreement on Target-Bidirectional Recurrent Neural Networks for Sequence-to-Sequence Learning." Journal of Artificial Intelligence Research, 2020. doi:10.1613/JAIR.1.12008
Markdown
[Liu et al. "Agreement on Target-Bidirectional Recurrent Neural Networks for Sequence-to-Sequence Learning." Journal of Artificial Intelligence Research, 2020.](https://mlanthology.org/jair/2020/liu2020jair-agreement/) doi:10.1613/JAIR.1.12008
BibTeX
@article{liu2020jair-agreement,
title = {{Agreement on Target-Bidirectional Recurrent Neural Networks for Sequence-to-Sequence Learning}},
author = {Liu, Lemao and Finch, Andrew M. and Utiyama, Masao and Sumita, Eiichiro},
journal = {Journal of Artificial Intelligence Research},
year = {2020},
pages = {581--606},
doi = {10.1613/JAIR.1.12008},
volume = {67},
url = {https://mlanthology.org/jair/2020/liu2020jair-agreement/}
}