EA Reader: Enhance Attentive Reader for Cloze-Style Question Answering via Multi-Space Context Fusion
Abstract
Query-document semantic interactions are essential to the success of many cloze-style question answering models. Recently, researchers have proposed several attention-based methods that predict the answer by focusing on appropriate subparts of the context document. In this paper, we design a novel module to produce the query-aware context vector, named Multi-Space based Context Fusion (MSCF), with the following considerations: (1) interactions are applied across multiple latent semantic spaces; (2) attention is measured at bit level, not at token level. Moreover, we extend MSCF to a multi-hop architecture. This unified model is called the Enhanced Attentive Reader (EA Reader). During the iterative inference process, the reader is equipped with a novel memory update rule and maintains its understanding of the document through read, update and write operations. We conduct extensive experiments on four real-world datasets. Our results demonstrate that EA Reader outperforms state-of-the-art models.
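The abstract's two design points can be illustrated with a minimal sketch. The function below is a hypothetical NumPy illustration, not the paper's exact formulation: it projects document tokens and the query into several latent spaces and applies a sigmoid gate per dimension of each token ("bit level") rather than a single scalar weight per token; all names and the fusion formula are assumptions for illustration.

```python
import numpy as np

def multi_space_fusion(doc, query, projections):
    """Hypothetical sketch of multi-space, bit-level query-document fusion.

    doc:         (T, d) document token representations
    query:       (d,)   query summary vector
    projections: list of (d, d) matrices, one per latent semantic space
    """
    fused = np.zeros_like(doc)
    for W in projections:
        d_proj = doc @ W           # document tokens in this latent space
        q_proj = query @ W         # query in the same latent space
        # bit-level attention: an independent sigmoid gate for every
        # dimension of every token, instead of one scalar per token
        gate = 1.0 / (1.0 + np.exp(-(d_proj * q_proj)))
        fused += gate * d_proj
    # average the gated representations over the latent spaces
    return fused / len(projections)

rng = np.random.default_rng(0)
T, d, K = 5, 8, 3
doc = rng.standard_normal((T, d))
query = rng.standard_normal(d)
Ws = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(K)]
out = multi_space_fusion(doc, query, Ws)
print(out.shape)  # → (5, 8)
```

In a multi-hop setting, the fused output would feed a memory that is read, updated, and rewritten on each hop, as the abstract describes.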
Cite
Text
Fu and Zhang. "EA Reader: Enhance Attentive Reader for Cloze-Style Question Answering via Multi-Space Context Fusion." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33016375
Markdown
[Fu and Zhang. "EA Reader: Enhance Attentive Reader for Cloze-Style Question Answering via Multi-Space Context Fusion." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/fu2019aaai-ea/) doi:10.1609/AAAI.V33I01.33016375
BibTeX
@inproceedings{fu2019aaai-ea,
title = {{EA Reader: Enhance Attentive Reader for Cloze-Style Question Answering via Multi-Space Context Fusion}},
author = {Fu, Chengzhen and Zhang, Yan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {6375--6382},
doi = {10.1609/AAAI.V33I01.33016375},
url = {https://mlanthology.org/aaai/2019/fu2019aaai-ea/}
}