Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks
Abstract
The recent explosion in question answering research has produced a wealth of both factoid reading comprehension (RC) and commonsense reasoning datasets. Combining them presents a different kind of task: deciding not simply whether information is present in the text, but also whether a confident guess could be made for the missing information. We present QuAIL, the first RC dataset to combine text-based, world-knowledge, and unanswerable questions, and to provide question type annotation that enables diagnostics of the reasoning strategies used by a given QA system. QuAIL contains 15K multiple-choice questions for 800 texts in 4 domains. Crucially, it offers both general and text-specific questions, unlikely to be found in pretraining data. We show that QuAIL poses substantial challenges to current state-of-the-art systems, with a 30% drop in accuracy compared to the most similar existing dataset.
Cite
Text
Rogers et al. "Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6398
Markdown
[Rogers et al. "Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/rogers2020aaai-getting/) doi:10.1609/AAAI.V34I05.6398
BibTeX
@inproceedings{rogers2020aaai-getting,
title = {{Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks}},
author = {Rogers, Anna and Kovaleva, Olga and Downey, Matthew and Rumshisky, Anna},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {8722--8731},
doi = {10.1609/AAAI.V34I05.6398},
url = {https://mlanthology.org/aaai/2020/rogers2020aaai-getting/}
}