Coarse-Grain Fine-Grain Coattention Network for Multi-Evidence Question Answering

Abstract

End-to-end neural models have made significant progress in question answering; however, recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query and then finds a relevant answer, and a fine-grain module that scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.
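The abstract's core building block is coattention between a support document and the query, applied hierarchically in both modules. Below is a minimal sketch of one such coattention layer in PyTorch; the function name, shapes, and second-level summary step are illustrative assumptions based on standard coattention formulations, not the authors' released implementation.

```python
# Sketch of a single coattention layer (hedged: names and shapes are assumptions).
import torch
import torch.nn.functional as F

def coattention(doc, query):
    """doc: (n_doc, d), query: (n_q, d) -> coattended doc representation (n_doc, 2d)."""
    # Affinity between every document word and every query word.
    affinity = doc @ query.t()                      # (n_doc, n_q)
    # Normalize over the query for each document word, and vice versa.
    attn_over_query = F.softmax(affinity, dim=1)    # rows sum to 1
    attn_over_doc = F.softmax(affinity, dim=0)      # columns sum to 1
    query_summary = attn_over_query @ query         # (n_doc, d)
    doc_summary = attn_over_doc.t() @ doc           # (n_q, d)
    # Second-level summary: propagate document context back through the query attention.
    coattn = attn_over_query @ doc_summary          # (n_doc, d)
    return torch.cat([query_summary, coattn], dim=1)

# Example: a 30-word support document and a 6-word query, both with 100-d encodings.
doc, query = torch.randn(30, 100), torch.randn(6, 100)
print(coattention(doc, query).shape)  # torch.Size([30, 200])
```

In the paper's setting, outputs like these would be further summarized with self-attention and compared across documents to score candidate answers.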

Cite

Text

Zhong et al. "Coarse-Grain Fine-Grain Coattention Network for Multi-Evidence Question Answering." International Conference on Learning Representations, 2019.

Markdown

[Zhong et al. "Coarse-Grain Fine-Grain Coattention Network for Multi-Evidence Question Answering." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/zhong2019iclr-coarsegrain/)

BibTeX

@inproceedings{zhong2019iclr-coarsegrain,
  title     = {{Coarse-Grain Fine-Grain Coattention Network for Multi-Evidence Question Answering}},
  author    = {Zhong, Victor and Xiong, Caiming and Keskar, Nitish Shirish and Socher, Richard},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/zhong2019iclr-coarsegrain/}
}