Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation
Abstract
Unsupervised domain adaptation (UDA) improves model accuracy in an unlabeled target domain using a labeled source domain. However, UDA models often lack calibrated predictive uncertainty on target data, posing risks in safety-critical applications. In this paper, we address this under-explored challenge with Pseudo-Calibration (PseudoCal), a novel post-hoc calibration framework. In contrast to prior approaches, we consider UDA calibration as a target-domain specific unsupervised problem rather than a \emph{covariate shift} problem across domains. With a synthesized labeled pseudo-target set that captures the structure of the real target, we turn the unsupervised calibration problem into a supervised one, readily solvable with \emph{temperature scaling}. Extensive empirical evaluation across 5 diverse UDA scenarios involving 10 UDA methods, along with unsupervised fine-tuning of foundation models such as CLIP, consistently demonstrates the superior performance of PseudoCal over alternative calibration methods. Code is available at \url{https://github.com/LHXXHB/PseudoCal}.
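The abstract reduces UDA calibration to supervised calibration on a synthesized labeled pseudo-target set, solved with temperature scaling. As a minimal illustration of that final step, the sketch below fits a single temperature T by minimizing negative log-likelihood on a labeled set, using a simple grid search in place of the usual gradient-based fit; the function names and the NumPy-only setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.1, 10.0, 200)):
    """Temperature scaling: pick T > 0 minimizing the NLL of
    softmax(logits / T) against the (pseudo-)labels."""
    idx = np.arange(len(labels))
    nlls = [
        -np.log(softmax(logits / t)[idx, labels] + 1e-12).mean()
        for t in grid
    ]
    return float(grid[int(np.argmin(nlls))])
```

In PseudoCal the `labels` here would come from the synthesized pseudo-target set rather than real target annotations; for an overconfident model (high softmax confidence, lower accuracy) the fitted temperature comes out above 1, softening the predictive distribution.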
Cite
Text
Hu et al. "Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation." NeurIPS 2023 Workshops: DistShift, 2023.
Markdown
[Hu et al. "Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation." NeurIPS 2023 Workshops: DistShift, 2023.](https://mlanthology.org/neuripsw/2023/hu2023neuripsw-pseudocalibration/)
BibTeX
@inproceedings{hu2023neuripsw-pseudocalibration,
title = {{Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation}},
author = {Hu, Dapeng and Liang, Jian and Wang, Xinchao and Foo, Chuan-Sheng},
booktitle = {NeurIPS 2023 Workshops: DistShift},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/hu2023neuripsw-pseudocalibration/}
}