Recurrent Neural Networks with Auxiliary Labels for Cross-Domain Opinion Target Extraction

Abstract

Opinion target extraction is a fundamental task in opinion mining. In recent years, neural network based supervised learning methods have achieved competitive performance on this task. However, as with any supervised learning method, neural network based methods for this task do not work well when the training data comes from a different domain than the test data. On the other hand, some rule-based unsupervised methods have been shown to be robust when applied to different domains. In this work, we use rule-based unsupervised methods to create auxiliary labels and use neural network models to learn a hidden representation that works well across different domains. When this hidden representation is used for opinion target extraction, we find that it can outperform a number of strong baselines by a large margin.
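
The sketch below illustrates the general idea described in the abstract; it is not the authors' implementation. It assumes PyTorch (the paper does not prescribe a framework) and shows a shared BiLSTM encoder with two tagging heads: a main head trained on gold opinion-target tags from the source domain, and an auxiliary head trained on labels produced by a rule-based unsupervised extractor, which can be generated for both domains. All class names, dimensions, and the auxiliary-loss weight are illustrative assumptions.

import torch
import torch.nn as nn

class AuxLabelTagger(nn.Module):
    """Shared BiLSTM encoder with a main tagging head and an auxiliary head.

    Hypothetical sketch: the shared hidden states serve as the cross-domain
    representation; the auxiliary head is supervised by rule-based labels.
    """

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128,
                 num_target_tags=3, num_aux_tags=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder whose hidden states are reused by both heads.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Main head: BIO tags for opinion targets (gold labels, source domain).
        self.target_head = nn.Linear(2 * hidden_dim, num_target_tags)
        # Auxiliary head: labels from a rule-based extractor (both domains).
        self.aux_head = nn.Linear(2 * hidden_dim, num_aux_tags)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq, emb_dim)
        hidden, _ = self.encoder(embedded)        # (batch, seq, 2 * hidden_dim)
        return self.target_head(hidden), self.aux_head(hidden)


def joint_loss(target_logits, target_tags, aux_logits, aux_tags, aux_weight=0.5):
    """Combine the main tagging loss with a weighted auxiliary loss.

    The auxiliary term can be computed on target-domain sentences tagged by
    the rules, nudging the shared encoder toward transferable features.
    """
    ce = nn.CrossEntropyLoss()
    main = ce(target_logits.reshape(-1, target_logits.size(-1)),
              target_tags.reshape(-1))
    aux = ce(aux_logits.reshape(-1, aux_logits.size(-1)),
             aux_tags.reshape(-1))
    return main + aux_weight * aux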

Cite

Text

Ding et al. "Recurrent Neural Networks with Auxiliary Labels for Cross-Domain Opinion Target Extraction." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.11014

Markdown

[Ding et al. "Recurrent Neural Networks with Auxiliary Labels for Cross-Domain Opinion Target Extraction." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/ding2017aaai-recurrent/) doi:10.1609/AAAI.V31I1.11014

BibTeX

@inproceedings{ding2017aaai-recurrent,
  title     = {{Recurrent Neural Networks with Auxiliary Labels for Cross-Domain Opinion Target Extraction}},
  author    = {Ding, Ying and Yu, Jianfei and Jiang, Jing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {3436--3442},
  doi       = {10.1609/AAAI.V31I1.11014},
  url       = {https://mlanthology.org/aaai/2017/ding2017aaai-recurrent/}
}