Towards Unbiased Evaluation of Detecting Unanswerable Questions in EHRSQL
Abstract
Incorporating unanswerable questions into EHR QA systems is crucial for testing a system's trustworthiness, since fabricating responses to such questions can mislead doctors in their diagnoses. The EHRSQL dataset stands out as a promising benchmark because it is the only dataset that incorporates unanswerable questions alongside practical questions for EHR QA. However, in this work, we identify a data bias in these unanswerable questions: they can often be discerned simply by filtering with specific N-gram patterns. Such biases jeopardize the authenticity and reliability of QA system evaluations. To tackle this problem, we propose a simple debiasing method that adjusts the split between the validation and test sets to neutralize the undue influence of N-gram filtering. Through experiments on the MIMIC-III dataset, we demonstrate both the existing data bias in EHRSQL and the effectiveness of our data split strategy in mitigating this bias.
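The shortcut the abstract describes can be illustrated with a minimal sketch: if certain surface patterns occur almost exclusively in unanswerable questions, a trivial substring filter can "detect" them without any real language understanding. The patterns and questions below are invented for illustration and are not taken from EHRSQL.

```python
# Hypothetical N-gram/pattern filter illustrating the bias described above.
# If unanswerable questions share telltale phrases, this filter alone can
# separate them from answerable ones, inflating evaluation scores.
UNANSWERABLE_PATTERNS = [
    "weather tomorrow",        # out-of-scope request (illustrative)
    "outside the database",    # explicit out-of-database phrasing (illustrative)
]

def flag_unanswerable(question: str) -> bool:
    """Flag a question as unanswerable if it contains any known pattern."""
    q = question.lower()
    return any(pattern in q for pattern in UNANSWERABLE_PATTERNS)

questions = [
    "What was patient 10026's last creatinine value?",
    "Can you tell me the weather tomorrow?",
]
print([flag_unanswerable(q) for q in questions])  # [False, True]
```

A debiased split, as proposed in the paper, redistributes examples between validation and test so that such pattern filters tuned on one split no longer transfer to the other.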
Cite
Text
Yang et al. "Towards Unbiased Evaluation of Detecting Unanswerable Questions in EHRSQL." ICLR 2024 Workshops: DPFM, 2024.
Markdown
[Yang et al. "Towards Unbiased Evaluation of Detecting Unanswerable Questions in EHRSQL." ICLR 2024 Workshops: DPFM, 2024.](https://mlanthology.org/iclrw/2024/yang2024iclrw-unbiased/)
BibTeX
@inproceedings{yang2024iclrw-unbiased,
title = {{Towards Unbiased Evaluation of Detecting Unanswerable Questions in EHRSQL}},
author = {Yang, Yongjin and Kim, Sihyeon and Kim, SangMook and Lee, Gyubok and Yun, Se-Young and Choi, Edward},
booktitle = {ICLR 2024 Workshops: DPFM},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/yang2024iclrw-unbiased/}
}