Sharing Attention Weights for Fast Transformer

Abstract

Recently, the Transformer machine translation system has shown strong results by stacking attention layers on both the source- and target-language sides. However, inference with this model is slow due to the heavy use of dot-product attention in auto-regressive decoding. In this paper, we speed up the Transformer via a fast and lightweight attention model. More specifically, we share attention weights across adjacent layers, enabling efficient vertical re-use of hidden states. Moreover, the sharing policy can be learned jointly with the MT model. We test our approach on ten WMT and NIST OpenMT tasks. Experimental results show that it yields an average 1.3X speed-up (with almost no decrease in BLEU) on top of a state-of-the-art implementation that already uses a cache for fast inference. Our approach also obtains a 1.8X speed-up when combined with the AAN model, which is 16 times faster than the baseline that does not use the attention cache.
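The sketch below is a minimal illustration (not the authors' implementation) of the core idea described in the abstract: an attention layer can skip its own query/key projections and dot-product entirely by reusing the attention weights computed in the adjacent lower layer. Single-head attention is assumed for brevity, and the class and parameter names (SharedWeightAttention, shared_weights) are illustrative.

import math
import torch
import torch.nn as nn


class SharedWeightAttention(nn.Module):
    """Self-attention layer that either computes its own attention weights
    or reuses the weights produced by the adjacent lower layer."""

    def __init__(self, d_model):
        super().__init__()
        self.wq = nn.Linear(d_model, d_model)
        self.wk = nn.Linear(d_model, d_model)
        self.wv = nn.Linear(d_model, d_model)
        self.scale = math.sqrt(d_model)

    def forward(self, x, shared_weights=None):
        v = self.wv(x)
        if shared_weights is None:
            # Standard scaled dot-product attention weights.
            q, k = self.wq(x), self.wk(x)
            weights = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        else:
            # Reuse the weights from the adjacent layer: no query/key
            # projections and no QK^T dot-product are computed here.
            weights = shared_weights
        return weights @ v, weights


# Usage: layer 2 reuses layer 1's attention weights.
x = torch.randn(1, 10, 64)                  # (batch, length, d_model)
layer1, layer2 = SharedWeightAttention(64), SharedWeightAttention(64)
out1, w1 = layer1(x)
out2, _ = layer2(out1, shared_weights=w1)   # attention computation skipped

In the paper, which layers share weights is not hard-coded as above but determined by a sharing policy learned jointly with the MT model.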

Cite

Text

Xiao et al. "Sharing Attention Weights for Fast Transformer." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/735

Markdown

[Xiao et al. "Sharing Attention Weights for Fast Transformer." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/xiao2019ijcai-sharing/) doi:10.24963/IJCAI.2019/735

BibTeX

@inproceedings{xiao2019ijcai-sharing,
  title     = {{Sharing Attention Weights for Fast Transformer}},
  author    = {Xiao, Tong and Li, Yinqiao and Zhu, Jingbo and Yu, Zhengtao and Liu, Tongran},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5292--5298},
  doi       = {10.24963/IJCAI.2019/735},
  url       = {https://mlanthology.org/ijcai/2019/xiao2019ijcai-sharing/}
}