Faithful Heteroscedastic Regression with Neural Networks

Abstract

Heteroscedastic regression models a Gaussian variable’s mean and variance as a function of covariates. Parametric methods that employ neural networks for these parameter maps can capture complex relationships in the data. Yet, optimizing network parameters via log likelihood gradients can yield suboptimal mean and uncalibrated variance estimates. Current solutions side-step this optimization problem with surrogate objectives or Bayesian treatments. Instead, we make two simple modifications to optimization. Notably, their combination produces a heteroscedastic model with mean estimates that are provably as accurate as those from its homoscedastic counterpart (i.e. fitting the mean under squared error loss). For a wide variety of network and task complexities, we find that mean estimates from existing heteroscedastic solutions can be significantly less accurate than those from an equivalently expressive mean-only model. Our approach provably retains the accuracy of an equally flexible mean-only model while also offering best-in-class variance calibration. Lastly, we show how to leverage our method to recover the underlying heteroscedastic noise variance.
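The abstract does not spell out the two optimization modifications, but the pathology it refers to is visible directly in the heteroscedastic Gaussian negative log likelihood: its gradient with respect to the mean is the residual scaled by the inverse predicted variance, so observations assigned high variance are effectively down-weighted when fitting the mean. A minimal sketch of that scaling (function names are ours, not from the paper):

```python
# Toy illustration (not the paper's method): compare the mean gradient
# of the heteroscedastic Gaussian NLL against the plain squared-error
# gradient used by a homoscedastic (mean-only) model.

def nll_grad_mean(y: float, mu: float, var: float) -> float:
    """d/dmu of 0.5*log(var) + 0.5*(y - mu)**2 / var."""
    return (mu - y) / var

def mse_grad_mean(y: float, mu: float) -> float:
    """d/dmu of 0.5*(y - mu)**2, the squared-error (homoscedastic) fit."""
    return mu - y

y, mu = 2.0, 0.0
print(mse_grad_mean(y, mu))        # -2.0: full residual signal
print(nll_grad_mean(y, mu, 1.0))   # -2.0: matches squared error when var = 1
print(nll_grad_mean(y, mu, 10.0))  # -0.2: a large predicted variance shrinks the mean update
```

When the variance head over-predicts noise on hard examples, the mean head receives vanishing gradient there, which is one way the heteroscedastic mean can end up less accurate than an equally expressive mean-only model.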

Cite

Text

Stirn et al. "Faithful Heteroscedastic Regression with Neural Networks." Artificial Intelligence and Statistics, 2023.

Markdown

[Stirn et al. "Faithful Heteroscedastic Regression with Neural Networks." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/stirn2023aistats-faithful/)

BibTeX

@inproceedings{stirn2023aistats-faithful,
  title     = {{Faithful Heteroscedastic Regression with Neural Networks}},
  author    = {Stirn, Andrew and Wessels, Harm and Schertzer, Megan and Pereira, Laura and Sanjana, Neville and Knowles, David},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {5593--5613},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/stirn2023aistats-faithful/}
}