Generating Diverse Translation by Manipulating Multi-Head Attention

Abstract

The Transformer model (Vaswani et al. 2017) has been widely used in machine translation tasks and has obtained state-of-the-art results. In this paper, we report an interesting phenomenon in its encoder-decoder multi-head attention: different attention heads of the final decoder layer align to different word translation candidates. We empirically verify this discovery and propose a method to generate diverse translations by manipulating these heads. Furthermore, we make use of these diverse translations with the back-translation technique for better data augmentation. Experimental results show that our method generates diverse translations without a severe drop in translation quality. Experiments also show that back-translation with these diverse translations brings a significant improvement in performance on translation tasks. An auxiliary experiment on a conversational response generation task demonstrates the effect of diversity as well.
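To make the idea of "manipulating heads" concrete, here is a minimal, hypothetical sketch of forcing the final decoder layer's encoder-decoder attention to follow a single head, so that decoding is biased toward the source word that head aligns to. The tensor shapes and the helper `mask_to_single_head` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: replace all encoder-decoder attention heads in the
# last decoder layer with one chosen head, so the context vector follows
# that head's alignment and different heads yield different candidates.
import torch

def mask_to_single_head(attn_weights: torch.Tensor, head: int) -> torch.Tensor:
    """attn_weights: (num_heads, tgt_len, src_len) softmax-normalized
    encoder-decoder attention weights from the final decoder layer.
    Returns weights in which every head is replaced by the chosen head."""
    selected = attn_weights[head].unsqueeze(0)        # (1, tgt_len, src_len)
    return selected.expand_as(attn_weights).contiguous()

# Toy usage: 4 heads, 1 target step, 5 source tokens.
torch.manual_seed(0)
weights = torch.softmax(torch.randn(4, 1, 5), dim=-1)
for h in range(weights.size(0)):
    forced = mask_to_single_head(weights, h)
    print(f"head {h} aligns most strongly to source position",
          forced[0, 0].argmax().item())
```

Decoding once per chosen head in this manner would produce a set of candidate translations, which could then be fed to back-translation for data augmentation as the abstract describes.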

Cite

Text

Sun et al. "Generating Diverse Translation by Manipulating Multi-Head Attention." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6429

Markdown

[Sun et al. "Generating Diverse Translation by Manipulating Multi-Head Attention." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/sun2020aaai-generating/) doi:10.1609/AAAI.V34I05.6429

BibTeX

@inproceedings{sun2020aaai-generating,
  title     = {{Generating Diverse Translation by Manipulating Multi-Head Attention}},
  author    = {Sun, Zewei and Huang, Shujian and Wei, Hao-Ran and Dai, Xinyu and Chen, Jiajun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {8976--8983},
  doi       = {10.1609/AAAI.V34I05.6429},
  url       = {https://mlanthology.org/aaai/2020/sun2020aaai-generating/}
}