QADiver: Interactive Framework for Diagnosing QA Models

Abstract

Question answering (QA), the task of extracting answers from text for a given natural-language question, has been actively studied, and existing models have shown promise of outperforming human performance when trained and evaluated on the SQuAD dataset. However, such performance may not be replicated in real-world settings, in which case we need to diagnose the cause, which is non-trivial due to the complexity of the model. We thus propose a web-based UI that shows how each part of a model contributes to QA performance, by integrating visualization and analysis tools for model explanation. We expect this framework can help QA researchers refine and improve their models.

Cite

Text

Lee et al. "QADiver: Interactive Framework for Diagnosing QA Models." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33019861

Markdown

[Lee et al. "QADiver: Interactive Framework for Diagnosing QA Models." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/lee2019aaai-qadiver/) doi:10.1609/AAAI.V33I01.33019861

BibTeX

@inproceedings{lee2019aaai-qadiver,
  title     = {{QADiver: Interactive Framework for Diagnosing QA Models}},
  author    = {Lee, Gyeongbok and Kim, Sungdong and Hwang, Seung-won},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {9861--9862},
  doi       = {10.1609/AAAI.V33I01.33019861},
  url       = {https://mlanthology.org/aaai/2019/lee2019aaai-qadiver/}
}