Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks

Abstract

Uncertainty quantification, once a singular task, has evolved into a spectrum of tasks, including abstained prediction, out-of-distribution detection, and aleatoric uncertainty quantification. The latest goal is disentanglement: the construction of multiple estimators that are each tailored to one and only one source of uncertainty. This paper evaluates a broad range of Bayesian, evidential, and deterministic methods across various uncertainty tasks on ImageNet. We find that, despite promising theoretical endeavors, disentanglement is not yet achieved in practice. Further, we reveal which uncertainty estimators excel at which specific tasks, providing insights for practitioners and guiding future research toward task-centric and disentangled uncertainty estimation methods. Our code is available at https://anonymous.4open.science/r/bud-ED1B/.

Cite

Text

Mucsányi et al. "Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks." ICML 2024 Workshops: SPIGM, 2024.

Markdown

[Mucsányi et al. "Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks." ICML 2024 Workshops: SPIGM, 2024.](https://mlanthology.org/icmlw/2024/mucsanyi2024icmlw-benchmarking/)

BibTeX

@inproceedings{mucsanyi2024icmlw-benchmarking,
  title     = {{Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks}},
  author    = {Mucsányi, Bálint and Kirchhof, Michael and Oh, Seong Joon},
  booktitle = {ICML 2024 Workshops: SPIGM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/mucsanyi2024icmlw-benchmarking/}
}