Reasoning with Memory Augmented Neural Networks for Language Comprehension
Abstract
Hypothesis testing is an important cognitive process that supports human reasoning. In this paper, we introduce a computational hypothesis testing approach based on memory augmented neural networks. Our approach involves a hypothesis testing loop that reconsiders and progressively refines a previously formed hypothesis in order to generate new hypotheses to test. We apply the proposed approach to the language comprehension task by using Neural Semantic Encoders (NSE). Our NSE models achieve state-of-the-art results, showing an absolute improvement of 1.2% to 2.6% in accuracy over previous results obtained by single and ensemble systems on standard machine comprehension benchmarks such as the Children's Book Test (CBT) and Who-Did-What (WDW) news article datasets.
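The core idea of the hypothesis testing loop can be illustrated with a small sketch: a hypothesis vector starts from the query encoding and is repeatedly refined by re-reading a memory of encoded document tokens conditioned on the current hypothesis. This is a conceptual toy with random vectors, not the authors' NSE architecture; all function names and the blending rule are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_read(memory, hypothesis):
    # Softmax dot-product attention over memory slots (encoded document tokens).
    scores = memory @ hypothesis
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

def hypothesis_testing_loop(memory, query, n_steps=3):
    # Start from the query encoding and iteratively refine the hypothesis
    # by re-reading memory conditioned on the current hypothesis.
    hypothesis = query
    for _ in range(n_steps):
        evidence = attention_read(memory, hypothesis)
        # Toy refinement step: blend the current hypothesis with the
        # retrieved evidence (NSE uses learned update functions instead).
        hypothesis = 0.5 * hypothesis + 0.5 * evidence
    return hypothesis

memory = rng.normal(size=(10, 8))   # 10 encoded document tokens, dim 8
query = rng.normal(size=8)          # encoded query
final = hypothesis_testing_loop(memory, query)
print(final.shape)
```

In the paper, the refinement and read operations are learned neural modules and the final hypothesis is scored against candidate answers; the sketch only shows the reconsider-and-refine control flow.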
Cite
Text
Munkhdalai and Yu. "Reasoning with Memory Augmented Neural Networks for Language Comprehension." International Conference on Learning Representations, 2017.
Markdown
[Munkhdalai and Yu. "Reasoning with Memory Augmented Neural Networks for Language Comprehension." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/munkhdalai2017iclr-reasoning/)
BibTeX
@inproceedings{munkhdalai2017iclr-reasoning,
title = {{Reasoning with Memory Augmented Neural Networks for Language Comprehension}},
author = {Munkhdalai, Tsendsuren and Yu, Hong},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/munkhdalai2017iclr-reasoning/}
}