Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data
Abstract
Uncertainty quantification is crucial for many safety-critical applications. Evidential Deep Learning (EDL) has been shown to provide effective and efficient uncertainty estimates on well-curated data. Yet, the effect of class imbalance on its performance remains poorly understood. Since real-world data often exhibits a skewed class distribution, in this paper we holistically study the behaviour of EDL under label imbalance and propose Hybrid-EDL, which integrates data over-sampling and post-hoc calibration to boost the robustness of EDL. Extensive experiments on synthetic and real-world healthcare datasets with label distribution skew demonstrate the superiority of Hybrid-EDL in terms of in-domain categorical prediction and confidence estimation, as well as out-of-distribution detection. Our research closes the gap between the theory of uncertainty quantification and the practice of trustworthy applications.
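For context, the sketch below illustrates how standard EDL derives class probabilities and a vacuity-style uncertainty score from Dirichlet evidence; the softplus evidence head and function names are illustrative assumptions, not the paper's released Hybrid-EDL code.

```python
# Minimal sketch of standard EDL uncertainty (not the paper's implementation).
# Assumes a classifier head whose raw logits are mapped to non-negative evidence.
import torch
import torch.nn.functional as F

def edl_uncertainty(logits: torch.Tensor):
    """Map raw logits to Dirichlet parameters and a subjective-logic uncertainty.

    logits: (batch, num_classes) raw network outputs.
    Returns (probs, uncertainty), where uncertainty lies in (0, 1] and grows
    as the total accumulated evidence shrinks.
    """
    evidence = F.softplus(logits)             # non-negative evidence per class
    alpha = evidence + 1.0                    # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                  # expected class probabilities
    num_classes = logits.shape[-1]
    uncertainty = num_classes / strength      # vacuity: high when evidence is scarce
    return probs, uncertainty.squeeze(-1)

# Example: the second sample carries much weaker evidence, so its uncertainty is higher.
logits = torch.tensor([[4.0, 0.5, 0.2], [0.1, 0.2, 0.1]])
probs, u = edl_uncertainty(logits)
print(probs, u)
```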
Cite
Text
Xia et al. "Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data." NeurIPS 2022 Workshops: TSRML, 2022.
Markdown
[Xia et al. "Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data." NeurIPS 2022 Workshops: TSRML, 2022.](https://mlanthology.org/neuripsw/2022/xia2022neuripsw-hybridedl/)
BibTeX
@inproceedings{xia2022neuripsw-hybridedl,
  title     = {{Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data}},
  author    = {Xia, Tong and Han, Jing and Qendro, Lorena and Dang, Ting and Mascolo, Cecilia},
  booktitle = {NeurIPS 2022 Workshops: TSRML},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/xia2022neuripsw-hybridedl/}
}