Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods

Abstract

Interpreting deep neural networks is of great importance for understanding and verifying deep models for natural language processing (NLP) tasks. However, most existing approaches focus only on improving model performance and ignore interpretability. In this work, we propose an approach to investigate the meaning of hidden neurons in convolutional neural network (CNN) models. We first employ saliency maps and optimization techniques to approximate the information that hidden neurons detect in input sentences. We then develop regularization terms and explore words in the vocabulary to interpret this detected information. Experimental results demonstrate that our approach can identify meaningful and reasonable interpretations for hidden spatial locations. Additionally, we show that our approach can describe the decision procedure of deep NLP models.
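
The abstract describes a two-step procedure: optimize a continuous input to maximize a hidden neuron's activation (with regularization), then map the result back to vocabulary words. Below is a minimal PyTorch sketch of this general idea, not the authors' implementation; the model structure, hyperparameters, and the L2 penalty are all illustrative assumptions.

# Sketch (assumed, not the paper's code): maximize a hidden CNN neuron's
# activation over a continuous input embedding, regularize the input, then
# interpret the optimized vectors via nearest vocabulary words.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim, seq_len = 5000, 100, 20
embedding = nn.Embedding(vocab_size, embed_dim)            # pretrained in practice
conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)  # one CNN layer

# Optimize a continuous "sentence" instead of discrete word ids.
x = torch.randn(1, seq_len, embed_dim, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.1)

neuron, position = 10, 5   # hidden feature map / spatial location to probe
lam = 1e-3                 # regularization strength (assumed value)

for _ in range(200):
    optimizer.zero_grad()
    h = conv(x.transpose(1, 2))              # (1, 64, seq_len)
    activation = h[0, neuron, position]
    loss = -activation + lam * x.norm() ** 2 # maximize activation, penalize norm
    loss.backward()
    optimizer.step()

# Interpretation step: nearest vocabulary word for each optimized vector.
with torch.no_grad():
    sims = F.normalize(x[0], dim=1) @ F.normalize(embedding.weight, dim=1).T
    nearest = sims.argmax(dim=1)             # word ids best matching each position
print(nearest.tolist())

The optimization runs in continuous embedding space because word ids are discrete and non-differentiable; projecting the result back onto the vocabulary by embedding similarity is one plausible way to realize the "explore words in the vocabulary" step the abstract mentions.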

Cite

Text

Yuan et al. "Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33015717

Markdown

[Yuan et al. "Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/yuan2019aaai-interpreting/) doi:10.1609/AAAI.V33I01.33015717

BibTeX

@inproceedings{yuan2019aaai-interpreting,
  title     = {{Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods}},
  author    = {Yuan, Hao and Chen, Yongjun and Hu, Xia and Ji, Shuiwang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5717--5724},
  doi       = {10.1609/AAAI.V33I01.33015717},
  url       = {https://mlanthology.org/aaai/2019/yuan2019aaai-interpreting/}
}