Uncertainty Quantification for Deep Regression Using Contextualised Normalizing Flows

Abstract

Quantifying uncertainty in deep regression models is important both for understanding the confidence of the model and for safe decision-making in high-risk domains. Existing approaches that yield prediction intervals overlook distributional information, neglecting the effect of multimodal or asymmetric distributions on decision-making. Similarly, full or approximated Bayesian methods, while yielding the predictive posterior density, demand major modifications to the model architecture and retraining. We introduce MCNF, a novel post hoc uncertainty quantification method that produces both prediction intervals and the full conditional predictive distribution. MCNF operates on top of the underlying trained predictive model; thus, no predictive model retraining is needed. We provide experimental evidence that the MCNF-based uncertainty estimate is well calibrated, is competitive with state-of-the-art uncertainty quantification methods, and provides richer information for downstream decision-making tasks.
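To make the post hoc idea concrete, here is a minimal sketch, not the paper's MCNF implementation: a single conditional affine flow layer (a heteroscedastic Gaussian, the simplest normalizing flow) fitted by maximum likelihood to the residuals of a frozen point predictor. The toy data, the linear parameterisation of the flow, and all names (`predictor`, `w_mu`, `w_s`, etc.) are illustrative assumptions, not from the paper; a real contextualised flow would stack richer conditional transforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic regression data.
x = rng.uniform(-2.0, 2.0, size=(500, 1))
y = np.sin(2 * x) + (0.1 + 0.3 * np.abs(x)) * rng.normal(size=(500, 1))

def predictor(x):
    # Stand-in for the trained deep regressor; it stays frozen throughout.
    return np.sin(2 * x)

resid = y - predictor(x)  # the flow is fitted post hoc on residuals only

# One conditional affine flow layer: r = mu(c) + exp(s(c)) * z, z ~ N(0, 1),
# with mu linear in x and the log-scale s linear in |x| (illustrative choice).
w_mu, b_mu = np.zeros((1, 1)), 0.0
w_s, b_s = np.zeros((1, 1)), 0.0

lr = 0.05
for _ in range(2000):
    mu = x @ w_mu + b_mu
    s = np.abs(x) @ w_s + b_s
    z = (resid - mu) * np.exp(-s)
    # Per-point negative log-likelihood is 0.5*z**2 + s + const;
    # the lines below are its exact gradients w.r.t. mu and s.
    dmu = -z * np.exp(-s)
    ds = 1.0 - z**2
    w_mu -= lr * (x.T @ dmu) / len(x)
    b_mu -= lr * dmu.mean()
    w_s -= lr * (np.abs(x).T @ ds) / len(x)
    b_s -= lr * ds.mean()

# At a new input, sample the full predictive distribution (not just a point
# estimate) and read off a 90% prediction interval from its quantiles.
x_new = np.array([[1.5]])
z = rng.normal(size=(10000, 1))
samples = (predictor(x_new)
           + (x_new @ w_mu + b_mu)
           + np.exp(np.abs(x_new) @ w_s + b_s) * z)
lo, hi = np.quantile(samples, [0.05, 0.95])
print(f"90% interval at x=1.5: [{lo:.2f}, {hi:.2f}]")
```

Because the flow yields samples from the conditional density rather than only an interval, the same machinery exposes multimodality or asymmetry to a downstream decision-maker, which is the distributional information the abstract argues plain interval methods discard.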

Cite

Text

Marco et al. "Uncertainty Quantification for Deep Regression Using Contextualised Normalizing Flows." Advances in Neural Information Processing Systems, 2025.

Markdown

[Marco et al. "Uncertainty Quantification for Deep Regression Using Contextualised Normalizing Flows." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/marco2025neurips-uncertainty/)

BibTeX

@inproceedings{marco2025neurips-uncertainty,
  title     = {{Uncertainty Quantification for Deep Regression Using Contextualised Normalizing Flows}},
  author    = {Marco, Adriel Sosa and Kirwan, John D. and Toumpa, Alexia and Gerasimou, Simos},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/marco2025neurips-uncertainty/}
}