Maximum Likelihood Uncertainty Estimation: Robustness to Outliers

Abstract

We benchmark the robustness of maximum-likelihood-based uncertainty estimation methods to outliers in training data for regression tasks. Outliers or noisy labels in training data result in degraded performance as well as incorrect uncertainty estimates. We propose the use of a heavy-tailed distribution (the Laplace distribution) to improve robustness to outliers. This property is evaluated on standard regression benchmarks and on a high-dimensional regression task, monocular depth estimation, both containing outliers. In particular, maximum likelihood with a heavy-tailed distribution provides better uncertainty estimates, better separation in uncertainty for out-of-distribution data, and better detection of adversarial attacks in the presence of outliers.
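The intuition behind the Laplace proposal can be seen by comparing the two per-sample negative log-likelihoods. The sketch below uses the standard closed-form NLLs of the Gaussian and Laplace distributions (it is an illustration, not code from the paper): the Gaussian penalizes residuals quadratically, so a single outlier can dominate both the fit and the learned variance, while the Laplace penalty grows only linearly.

```python
import math

def gaussian_nll(y, mu, sigma):
    # NLL of y under N(mu, sigma^2): the squared-residual term
    # makes outliers contribute quadratically to the loss.
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2)

def laplace_nll(y, mu, b):
    # NLL of y under Laplace(mu, b): the absolute-residual term
    # grows only linearly, so outliers are down-weighted.
    return math.log(2 * b) + abs(y - mu) / b

# An outlier 10 units from the predicted mean (unit scale):
# the Gaussian penalty is ~50.9, the Laplace penalty ~10.7.
print(gaussian_nll(10.0, 0.0, 1.0))
print(laplace_nll(10.0, 0.0, 1.0))
```

In a heteroscedastic regression network, `mu` and the scale (`sigma` or `b`) would both be predicted per input, and the scale serves as the uncertainty estimate; the linear tail of the Laplace NLL is what keeps that estimate stable when training labels contain outliers.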

Cite

Text

Nair et al. "Maximum Likelihood Uncertainty Estimation: Robustness to Outliers." AAAI Conference on Artificial Intelligence, 2022. doi:10.48550/arxiv.2202.03870

Markdown

[Nair et al. "Maximum Likelihood Uncertainty Estimation: Robustness to Outliers." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/nair2022aaai-maximum/) doi:10.48550/arxiv.2202.03870

BibTeX

@inproceedings{nair2022aaai-maximum,
  title     = {{Maximum Likelihood Uncertainty Estimation: Robustness to Outliers}},
  author    = {Nair, Deebul S. and Hochgeschwender, Nico and Olivares-Méndez, Miguel A.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  doi       = {10.48550/arxiv.2202.03870},
  url       = {https://mlanthology.org/aaai/2022/nair2022aaai-maximum/}
}