Context-Aware Self-Attention Networks

Abstract

The self-attention model has shown its flexibility in parallel computation and its effectiveness in modeling both long- and short-term dependencies. However, it calculates the dependencies between representations without considering contextual information, which has proven useful for modeling dependencies among neural representations in various natural language tasks. In this work, we focus on improving self-attention networks by capturing the richness of context. To maintain the simplicity and flexibility of the self-attention networks, we propose to contextualize the transformations of the query and key layers, which are used to calculate the relevance between elements. Specifically, we leverage the internal representations that embed both global and deep contexts, thus avoiding reliance on external resources. Experimental results on WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed methods. Furthermore, we conduct extensive analyses to quantify how the context vectors participate in the self-attention model.
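The idea of contextualizing the query and key transformations can be illustrated with a minimal sketch. Here the global context is taken to be the mean of the layer's input representations and is mixed into the query/key projections; the function and parameter names (`Wq`, `Wk`, `Wv`, `Uq`, `Uk`) and the simple additive mixing are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_aware_attention(H, Wq, Wk, Wv, Uq, Uk):
    """Self-attention whose query/key projections are augmented with a
    global context vector (here: the mean of the layer input H).
    The values V are computed as in standard self-attention."""
    c = H.mean(axis=0, keepdims=True)   # global context vector, shape (1, d)
    Q = H @ Wq + c @ Uq                 # contextualized queries
    K = H @ Wk + c @ Uk                 # contextualized keys
    V = H @ Wv                          # values, unchanged
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # scaled dot-product relevance
    return softmax(scores, axis=-1) @ V
```

Because the context enters only through the query/key projections, the value pathway and the overall attention interface stay unchanged, which is what keeps the modification lightweight.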

Cite

Text

Yang et al. "Context-Aware Self-Attention Networks." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.3301387

Markdown

[Yang et al. "Context-Aware Self-Attention Networks." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/yang2019aaai-context/) doi:10.1609/AAAI.V33I01.3301387

BibTeX

@inproceedings{yang2019aaai-context,
  title     = {{Context-Aware Self-Attention Networks}},
  author    = {Yang, Baosong and Li, Jian and Wong, Derek F. and Chao, Lidia S. and Wang, Xing and Tu, Zhaopeng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {387--394},
  doi       = {10.1609/AAAI.V33I01.3301387},
  url       = {https://mlanthology.org/aaai/2019/yang2019aaai-context/}
}