Differentiated Attentive Representation Learning for Sentence Classification

Abstract

Attention-based models have been shown to be effective at learning representations for sentence classification, and they are typically equipped with a multi-hop attention mechanism. However, existing multi-hop models still suffer from paying too much attention to the most frequently noticed words, which might not be important for classifying the current sentence, and they lack an explicit, effective mechanism for shifting attention away from the wrong part of a sentence. In this paper, we alleviate this problem by proposing a differentiated attentive learning model. It is composed of two branches of attention subnets and an example discriminator. An explicit signal carrying the loss information of the first attention subnet is passed to the second one, driving the two subnets to learn different attentive preferences. The example discriminator then selects the suitable attention subnet for sentence classification. Experimental results on real and synthetic datasets demonstrate the effectiveness of our model.
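The abstract's core idea can be sketched in a toy form: two attention branches pool the same word vectors, and a signal derived from the first branch's loss pushes the second branch to attend to different words. The sketch below is purely illustrative and uses assumed details not given in the abstract (the softmax attention form, the subtractive penalty, the fixed `loss1` value, and the threshold-based selection rule); the paper's actual subnets and discriminator are learned networks.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(H, w):
    """Attention pooling: scores = H @ w, weights = softmax(scores),
    representation = weighted sum of word vectors."""
    a = softmax(H @ w)
    return a @ H, a

# Toy sentence: 4 words with 3-dim embeddings (random, for illustration only).
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))

# Branch 1: standard attention pooling.
w1 = rng.normal(size=3)            # branch-1 attention parameters (assumed form)
rep1, a1 = attend(H, w1)

# Differentiation signal (illustrative): scale branch 1's attention weights by
# its loss and subtract them from branch 2's scores, so that when branch 1 does
# badly, branch 2 is pushed away from the words branch 1 focused on.
loss1 = 0.9                        # assumed loss of branch 1 on this example
w2 = rng.normal(size=3)
scores2 = H @ w2 - loss1 * a1      # penalize branch-1's attention peaks
a2 = softmax(scores2)
rep2 = a2 @ H

# Example discriminator, sketched here as a simple rule: use branch 2's
# representation when branch 1's loss is high. In the paper this is a learned
# selector, not a fixed threshold.
chosen = rep1 if loss1 < 0.5 else rep2
```

Both branches see the same word vectors; only the scoring differs, which is the sense in which the two subnets develop "different attentive preferences."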

Cite

Text

Zhou et al. "Differentiated Attentive Representation Learning for Sentence Classification." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/644

Markdown

[Zhou et al. "Differentiated Attentive Representation Learning for Sentence Classification." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/zhou2018ijcai-differentiated/) doi:10.24963/IJCAI.2018/644

BibTeX

@inproceedings{zhou2018ijcai-differentiated,
  title     = {{Differentiated Attentive Representation Learning for Sentence Classification}},
  author    = {Zhou, Qianrong and Wang, Xiaojie and Dong, Xuan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {4630--4636},
  doi       = {10.24963/IJCAI.2018/644},
  url       = {https://mlanthology.org/ijcai/2018/zhou2018ijcai-differentiated/}
}