Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering
Abstract
A recently popular approach to answering open-domain questions is to first retrieve question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently and thus do not fully exploit the multiple retrieved passages, especially for questions that require several pieces of evidence, possibly appearing in different passages, to be answered. These observations raise the problem of aggregating evidence from multiple passages. In this paper, we treat this problem as answer re-ranking. Specifically, based on answer candidates generated by an existing state-of-the-art QA model, we propose two re-ranking methods, a strength-based and a coverage-based re-ranker, which use evidence aggregated from different passages to help entail the ground-truth answer to the question. Our model achieves state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA, and the open-domain version of TriviaQA, with about 8% improvement on the first two.
Cite
Text
Wang et al. "Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering." International Conference on Learning Representations, 2018.
Markdown
[Wang et al. "Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/wang2018iclr-evidence/)
BibTeX
@inproceedings{wang2018iclr-evidence,
title = {{Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering}},
author = {Wang, Shuohang and Yu, Mo and Jiang, Jing and Zhang, Wei and Guo, Xiaoxiao and Chang, Shiyu and Wang, Zhiguo and Klinger, Tim and Tesauro, Gerald and Campbell, Murray},
booktitle = {International Conference on Learning Representations},
year = {2018},
url = {https://mlanthology.org/iclr/2018/wang2018iclr-evidence/}
}