Adapting Prediction Sets to Distribution Shifts Without Labels

Abstract

Recently, there has been a surge of interest in deploying confidence set predictions, rather than point predictions, in machine learning. Unfortunately, the effectiveness of such prediction sets is frequently impaired by distribution shifts in practice, and the challenge is often compounded by the lack of ground truth labels at test time. Focusing on a standard set-valued prediction framework called conformal prediction (CP), this paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. This is achieved by two new methods, ECP and EACP, whose main idea is to adjust the score function in CP according to the base model's own uncertainty evaluation. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvement over existing baselines and nearly match the performance of fully supervised methods.
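
For context, the sketch below shows standard split conformal prediction for classification, plus a purely hypothetical entropy-based threshold adjustment meant only to convey the flavour of "adjusting the score using the model's own uncertainty on unlabeled test data". It is not the ECP/EACP procedure from the paper; the function names and the adjustment rule are illustrative assumptions.

import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Standard split conformal prediction with the 1 - p_y score.

    cal_probs:  (n, K) softmax probabilities on labeled calibration data
    cal_labels: (n,)   integer labels for the calibration data
    test_probs: (m, K) softmax probabilities on (unlabeled) test data
    alpha:      target miscoverage level
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, q_level, method="higher")
    # Prediction set: all classes whose score does not exceed the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

def entropy_adjusted_threshold(q, cal_probs, test_probs):
    """Hypothetical illustration only (NOT the paper's ECP/EACP): inflate the
    conformal threshold by the ratio of average predictive entropy on the
    shifted test data to that on the calibration data."""
    ent = lambda p: -np.sum(p * np.log(p + 1e-12), axis=1).mean()
    return q * ent(test_probs) / ent(cal_probs)

In this toy version, a larger average entropy on the test domain (a common symptom of distribution shift) would enlarge the prediction sets; the paper's methods implement this idea in a more principled way and require only the unlabeled test inputs.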

Cite

Text

Kasa et al. "Adapting Prediction Sets to Distribution Shifts Without Labels." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.

Markdown

[Kasa et al. "Adapting Prediction Sets to Distribution Shifts Without Labels." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/kasa2025uai-adapting/)

BibTeX

@inproceedings{kasa2025uai-adapting,
  title     = {{Adapting Prediction Sets to Distribution Shifts Without Labels}},
  author    = {Kasa, Kevin and Zhang, Zhiyu and Yang, Heng and Taylor, Graham W.},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  year      = {2025},
  pages     = {1990--2010},
  volume    = {286},
  url       = {https://mlanthology.org/uai/2025/kasa2025uai-adapting/}
}