Risk Bounds on Aleatoric Uncertainty Recovery

Abstract

Quantifying aleatoric uncertainty is a challenging task in machine learning, yet it is important for decision making under data-dependent uncertainty in model outcomes. Recent empirical studies of aleatoric uncertainty modeling in regression settings rely primarily on either a Gaussian likelihood or moment matching. However, the performance of these methods varies across datasets, and their theoretical guarantees have received little discussion. In this work, we investigate theoretical aspects of these approaches and establish risk bounds for their estimates. We provide conditions that are sufficient to guarantee the PAC-learnability of the aleatoric uncertainty. The study suggests that the likelihood- and moment matching-based methods enjoy different types of guarantees in their risk bounds, i.e., they calibrate different aspects of the uncertainty and thus exhibit distinct properties in different regimes of the parameter space. Finally, we conduct an empirical study whose promising results support our theorems.
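The Gaussian-likelihood approach mentioned in the abstract can be sketched in a few lines: a model predicts both a mean and an input-dependent variance, trained jointly by minimizing the Gaussian negative log-likelihood. The sketch below is a generic illustration, not the paper's formulation; the linear mean model, the log-variance model in |x|, and the learning rate are all illustrative assumptions.

```python
import numpy as np

# Illustrative heteroscedastic regression via Gaussian NLL (not the paper's
# exact method): synthetic data whose noise level grows with |x|.
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1.0, 1.0, size=n)
true_std = 0.2 + 0.3 * np.abs(x)            # aleatoric (data-dependent) noise
y = 2.0 * x + rng.normal(0.0, true_std)

# Mean model mu(x) = a*x + b; log-variance model lv(x) = c*|x| + d.
a = b = c = d = 0.0
lr = 0.02
for _ in range(20000):
    mu = a * x + b
    lv = c * np.abs(x) + d
    r = y - mu
    inv_var = np.exp(-lv)
    # Gradients of the per-sample Gaussian NLL 0.5*(lv + r**2 * exp(-lv)):
    g_mu = -(r * inv_var)                   # d NLL / d mu
    g_lv = 0.5 * (1.0 - r**2 * inv_var)     # d NLL / d lv
    a -= lr * np.mean(g_mu * x)
    b -= lr * np.mean(g_mu)
    c -= lr * np.mean(g_lv * np.abs(x))
    d -= lr * np.mean(g_lv)

std_at_0 = np.exp(0.5 * d)                  # recovered noise std near x = 0
std_at_1 = np.exp(0.5 * (c + d))            # recovered noise std at |x| = 1
print(f"slope={a:.3f}  std(0)={std_at_0:.3f}  std(1)={std_at_1:.3f}")
```

The NLL objective calibrates the predicted variance against the squared residuals; a moment matching-based alternative would instead fit the variance model to the empirical residual moments directly, which is the contrast the paper's risk bounds formalize.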

Cite

Text

Zhang et al. "Risk Bounds on Aleatoric Uncertainty Recovery." Artificial Intelligence and Statistics, 2023.

Markdown

[Zhang et al. "Risk Bounds on Aleatoric Uncertainty Recovery." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/zhang2023aistats-risk/)

BibTeX

@inproceedings{zhang2023aistats-risk,
  title     = {{Risk Bounds on Aleatoric Uncertainty Recovery}},
  author    = {Zhang, Yikai and Lin, Jiahe and Li, Fengpei and Adler, Yeshaya and Rasul, Kashif and Schneider, Anderson and Nevmyvaka, Yuriy},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {6015--6036},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/zhang2023aistats-risk/}
}