Multiple Positional Self-Attention Network for Text Classification
Abstract
Self-attention mechanisms have recently attracted considerable attention in Natural Language Processing (NLP) tasks, and relative positional information is important to them. We propose a Faraway Mask, which focuses attention on the (2m + 1)-gram words around each position, and a Scaled-Distance Mask, which applies a logarithmic distance penalty, to avoid and to weaken self-attention to distant words, respectively. To exploit the different masks, we present a Positional Self-Attention Layer that generates the different Masked-Self-Attentions, followed by a Position-Fusion Layer in which fused positional information is multiplied with the Masked-Self-Attentions to generate sentence embeddings. To evaluate our sentence-embedding approach, the Multiple Positional Self-Attention Network (MPSAN), we conduct comparison experiments on sentiment analysis, semantic relatedness, and sentence classification tasks. The results show that MPSAN outperforms state-of-the-art methods on five datasets, improving test accuracy by 0.81% on SST and 0.6% on CR. In addition, we reduce the number of training parameters and improve the time efficiency of MPSAN by lowering the dimensionality of self-attention and simplifying the fusion mechanism.
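For intuition, the sketch below shows one plausible reading of the two positional masks described in the abstract, realized as additive bias terms on the attention logits before the softmax: the Faraway Mask blocks words outside a (2m + 1)-gram window, while the Scaled-Distance Mask applies a logarithmic distance penalty. The function names, the additive-bias formulation, and the exact log1p penalty are illustrative assumptions, not the paper's precise equations.

```python
import numpy as np

def faraway_mask(n, m):
    """Additive mask that keeps only the (2m + 1)-gram window around each
    position; entries outside the window get -inf so softmax zeroes them out."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    return np.where(dist <= m, 0.0, -np.inf)

def scaled_distance_mask(n):
    """Additive mask applying a logarithmic distance penalty, weakening
    (rather than removing) attention to distant words."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
    return -np.log1p(dist)

def masked_self_attention(scores, mask):
    """Add a positional mask to raw attention scores, then softmax row-wise."""
    logits = scores + mask
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=-1, keepdims=True)

# Example: a length-6 sentence with random attention scores and window m = 2.
scores = np.random.randn(6, 6)
local_attn = masked_self_attention(scores, faraway_mask(6, m=2))
soft_attn = masked_self_attention(scores, scaled_distance_mask(6))
```

In this reading, the Positional Self-Attention Layer would compute one such masked attention per mask, and the Position-Fusion Layer would combine them into sentence embeddings, as outlined in the abstract.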
Cite
Text
Dai et al. "Multiple Positional Self-Attention Network for Text Classification." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6261
Markdown
[Dai et al. "Multiple Positional Self-Attention Network for Text Classification." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/dai2020aaai-multiple/) doi:10.1609/AAAI.V34I05.6261
BibTeX
@inproceedings{dai2020aaai-multiple,
title = {{Multiple Positional Self-Attention Network for Text Classification}},
author = {Dai, Biyun and Li, Jinlong and Xu, Ruoyi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {7610-7617},
doi = {10.1609/AAAI.V34I05.6261},
url = {https://mlanthology.org/aaai/2020/dai2020aaai-multiple/}
}