The Unreasonable Effectiveness of Deep Evidential Regression

Abstract

There is a significant need for principled uncertainty reasoning in machine learning systems as they are increasingly deployed in safety-critical domains. A new approach to uncertainty-aware regression with neural networks (NNs), based on learning evidential distributions for aleatoric and epistemic uncertainties, shows promise over traditional deterministic methods and typical Bayesian NNs, notably through its capability to disentangle these two kinds of uncertainty. Despite some empirical success of Deep Evidential Regression (DER), there are important gaps in its mathematical foundation that raise the question of why the proposed technique seemingly works. We detail these theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification method. We go on to discuss corrections and redefinitions of how aleatoric and epistemic uncertainties should be extracted from NNs.
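
For context on what is being critiqued: in the standard DER formulation (Amini et al., 2020), the network outputs the four parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution, and aleatoric and epistemic uncertainties are read off as E[sigma^2] = beta/(alpha - 1) and Var[mu] = beta/(nu*(alpha - 1)), respectively. The sketch below (plain Python, illustrative function name) only restates that standard extraction; it is not the corrected definitions this paper argues for.

def nig_uncertainties(gamma: float, nu: float, alpha: float, beta: float):
    """Split predictive uncertainty from Normal-Inverse-Gamma parameters.

    Standard DER reading (valid only for alpha > 1); illustrative sketch,
    not the redefinitions proposed in this paper.
    """
    if alpha <= 1.0:
        raise ValueError("aleatoric/epistemic moments require alpha > 1")
    prediction = gamma                       # E[mu]: point prediction
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: irreducible data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model (knowledge) uncertainty
    return prediction, aleatoric, epistemic

# Example with hypothetical parameter values:
# gamma=0.2, nu=4.0, alpha=3.0, beta=0.5 -> (0.2, 0.25, 0.0625)
print(nig_uncertainties(0.2, 4.0, 3.0, 0.5))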

Cite

Text

Meinert et al. "The Unreasonable Effectiveness of Deep Evidential Regression." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I8.26096

Markdown

[Meinert et al. "The Unreasonable Effectiveness of Deep Evidential Regression." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/meinert2023aaai-unreasonable/) doi:10.1609/AAAI.V37I8.26096

BibTeX

@inproceedings{meinert2023aaai-unreasonable,
  title     = {{The Unreasonable Effectiveness of Deep Evidential Regression}},
  author    = {Meinert, Nis and Gawlikowski, Jakob and Lavin, Alexander},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {9134--9142},
  doi       = {10.1609/AAAI.V37I8.26096},
  url       = {https://mlanthology.org/aaai/2023/meinert2023aaai-unreasonable/}
}