Employing External Rich Knowledge for Machine Comprehension

Abstract

The recently proposed machine comprehension (MC) task is an effort to address the natural language understanding problem. However, the small size of labeled machine comprehension data limits the application of deep neural network architectures, which have shown an advantage in semantic inference tasks. Previous methods employ many NLP tools to extract linguistic features but gain only little improvement over simple baselines. In this paper, we build an attention-based recurrent neural network model, train it with the help of external knowledge that is semantically relevant to machine comprehension, and achieve a new state-of-the-art result.

Cite

Text

Wang et al. "Employing External Rich Knowledge for Machine Comprehension." International Joint Conference on Artificial Intelligence, 2016.

Markdown

[Wang et al. "Employing External Rich Knowledge for Machine Comprehension." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/wang2016ijcai-employing/)

BibTeX

@inproceedings{wang2016ijcai-employing,
  title     = {{Employing External Rich Knowledge for Machine Comprehension}},
  author    = {Wang, Bingning and Guo, Shangmin and Liu, Kang and He, Shizhu and Zhao, Jun},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2929--2935},
  url       = {https://mlanthology.org/ijcai/2016/wang2016ijcai-employing/}
}