Training-Conditional Coverage Bounds Under Covariate Shift

Abstract

Conformal prediction methodology has recently been extended to the covariate shift setting, where the distribution of covariates differs between training and test data. While existing results ensure that the prediction sets from these methods achieve marginal coverage above a nominal level, their coverage rate conditional on the training dataset—referred to as training-conditional coverage—remains unexplored. In this paper, we address this gap by deriving upper bounds on the tail of the training-conditional coverage distribution, offering probably approximately correct (PAC) guarantees for these methods. Our results characterize the reliability of the prediction sets in terms of the severity of distributional changes and the size of the training dataset.
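To make the setting concrete, here is a minimal synthetic sketch of weighted split conformal prediction under covariate shift (in the spirit of the methodology the abstract refers to, e.g. Tibshirani et al., 2019) and of the quantity the paper studies: the coverage rate conditional on one fixed calibration set. All of the data, the predictor, and the (known) likelihood ratio below are illustrative assumptions, not the paper's own method or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data. Calibration covariates ~ P_X = N(0, 1);
# test covariates are shifted, ~ Q_X = N(1, 1).
n_cal, n_test, alpha = 500, 2000, 0.1
x_cal = rng.normal(size=n_cal)
x_test = rng.normal(loc=1.0, size=n_test)
y_cal = 2.0 * x_cal + rng.normal(scale=0.5, size=n_cal)
y_test = 2.0 * x_test + rng.normal(scale=0.5, size=n_test)

pred = lambda x: 2.0 * x                     # pretrained point predictor (assumed)
scores = np.abs(y_cal - pred(x_cal))         # nonconformity scores

# Likelihood ratio dQ_X/dP_X for N(1,1) vs N(0,1): w(x) = exp(x - 1/2).
w = lambda x: np.exp(x - 0.5)

def weighted_quantile(s, p, level):
    """Smallest value of s whose weighted CDF reaches `level`."""
    order = np.argsort(s)
    cdf = np.cumsum(p[order]) / p.sum()
    idx = np.searchsorted(cdf, level)
    return s[order][min(idx, len(s) - 1)]

w_cal = w(x_cal)
covered = np.empty(n_test, dtype=bool)
for i, xt in enumerate(x_test):
    # Weighted conformal: augment the calibration scores with +inf,
    # carrying the test point's own weight.
    s_aug = np.append(scores, np.inf)
    p_aug = np.append(w_cal, w(xt))
    q = weighted_quantile(s_aug, p_aug, 1 - alpha)
    covered[i] = np.abs(y_test[i] - pred(xt)) <= q

# One realization of the coverage conditional on the calibration set;
# the paper bounds the lower tail of this random variable (PAC-style).
coverage = covered.mean()
print(f"training-conditional coverage estimate: {coverage:.3f}")
```

Repeating this experiment with fresh calibration sets would produce a distribution of `coverage` values around the nominal level `1 - alpha`; the paper's bounds control how often that conditional coverage falls short of the target, as a function of the shift severity and the calibration sample size.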

Cite

Text

Pournaderi and Xiang. "Training-Conditional Coverage Bounds Under Covariate Shift." Transactions on Machine Learning Research, 2026.

Markdown

[Pournaderi and Xiang. "Training-Conditional Coverage Bounds Under Covariate Shift." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/pournaderi2026tmlr-trainingconditional/)

BibTeX

@article{pournaderi2026tmlr-trainingconditional,
  title     = {{Training-Conditional Coverage Bounds Under Covariate Shift}},
  author    = {Pournaderi, Mehrdad and Xiang, Yu},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/pournaderi2026tmlr-trainingconditional/}
}