Adjusting Regression Models for Conditional Uncertainty Calibration

Abstract

Conformal prediction methods provide finite-sample, distribution-free marginal coverage guarantees. However, they generally do not offer conditional coverage guarantees, which can be important for high-stakes decisions. In this paper, we propose a novel algorithm that trains a regression function to improve conditional coverage after applying the split conformal prediction procedure. We establish an upper bound on the miscoverage gap between the conditional coverage and the nominal coverage rate, and propose an end-to-end algorithm to control this upper bound. We demonstrate the efficacy of our method empirically on synthetic and real-world datasets.
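For context, the split conformal prediction procedure that the paper builds on can be sketched as follows. This is a minimal, generic sketch using absolute-residual conformity scores, not the paper's adjustment algorithm; the names `model`, `X_cal`, `y_cal`, `X_test`, and `alpha` are illustrative.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Generic split conformal prediction with absolute-residual scores.

    `model` is any fitted regressor exposing .predict(); alpha is the
    target miscoverage rate, so intervals aim for 1 - alpha coverage.
    """
    # Conformity scores on a held-out calibration set
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-adjusted empirical quantile of the scores
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Marginal (1 - alpha) prediction intervals; the width q_hat is
    # constant in x, which is why marginal coverage holds but
    # conditional coverage is not guaranteed
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

The constant interval width illustrates the gap the paper targets: the procedure covers 1 - alpha of test points on average, but can over- or under-cover in specific regions of the input space.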

Cite

Text

Gao et al. "Adjusting Regression Models for Conditional Uncertainty Calibration." Machine Learning, 2024. doi:10.1007/S10994-024-06627-7

Markdown

[Gao et al. "Adjusting Regression Models for Conditional Uncertainty Calibration." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/gao2024mlj-adjusting/) doi:10.1007/S10994-024-06627-7

BibTeX

@article{gao2024mlj-adjusting,
  title     = {{Adjusting Regression Models for Conditional Uncertainty Calibration}},
  author    = {Gao, Ruijiang and Yin, Mingzhang and McInerney, James and Kallus, Nathan},
  journal   = {Machine Learning},
  year      = {2024},
  pages     = {8347--8370},
  doi       = {10.1007/S10994-024-06627-7},
  volume    = {113},
  url       = {https://mlanthology.org/mlj/2024/gao2024mlj-adjusting/}
}