One Step Closer to Unbiased Aleatoric Uncertainty Estimation
Abstract
Neural networks are powerful tools in various applications, and quantifying their uncertainty is crucial for reliable decision-making. In deep learning, uncertainty is usually categorized into aleatoric (data) and epistemic (model) components. In this paper, we point out that the popular variance-attenuation method substantially overestimates aleatoric uncertainty. To address this issue, we propose a new estimation method that actively de-noises the observed data. Through a broad range of experiments, we demonstrate that our proposed approach approximates the actual data uncertainty much more closely than the standard method.
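For context, the "variance attenuation" baseline the abstract critiques is commonly the heteroscedastic Gaussian negative log-likelihood of Kendall-and-Gal-style regression, where the network predicts a per-point mean and log-variance and the learned variance is read off as aleatoric uncertainty. A minimal NumPy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Heteroscedastic Gaussian NLL ("variance attenuation") loss:
    0.5 * (log sigma^2 + (y - mu)^2 / sigma^2), averaged over points.
    The predicted variance exp(log_var) serves as the aleatoric
    uncertainty estimate."""
    return np.mean(0.5 * (log_var + (y - mu) ** 2 * np.exp(-log_var)))

# Toy check: with a perfect mean fit, the loss is minimized when the
# predicted variance matches the residual (noise) variance.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 0.5, size=10_000)   # noise std 0.5 -> variance 0.25
mu = np.zeros_like(y)
losses = {v: gaussian_nll(y, mu, np.full_like(y, np.log(v)))
          for v in (0.1, 0.25, 0.5)}
best = min(losses, key=losses.get)      # variance candidate with lowest loss
```

The paper's observation is that in practice the mean fit is imperfect, so residuals mix noise with model error and the learned variance overshoots the true data noise; the proposed fix de-noises the targets before fitting.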
Cite
Text
Zhang et al. "One Step Closer to Unbiased Aleatoric Uncertainty Estimation." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I15.29627
Markdown
[Zhang et al. "One Step Closer to Unbiased Aleatoric Uncertainty Estimation." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhang2024aaai-one/) doi:10.1609/AAAI.V38I15.29627
BibTeX
@inproceedings{zhang2024aaai-one,
title = {{One Step Closer to Unbiased Aleatoric Uncertainty Estimation}},
author = {Zhang, Wang and Ma, Ziwen Martin and Das, Subhro and Weng, Tsui-Wei Lily and Megretski, Alexandre and Daniel, Luca and Nguyen, Lam M.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {16857--16864},
doi = {10.1609/AAAI.V38I15.29627},
url = {https://mlanthology.org/aaai/2024/zhang2024aaai-one/}
}