A Unified View of Label Shift Estimation
Abstract
Under label shift, the label distribution $p(y)$ might change but the class-conditional distributions $p(x|y)$ do not. There are two dominant approaches for estimating the label marginal. BBSE (Black Box Shift Estimation), a moment-matching approach based on confusion matrices, is provably consistent and provides interpretable error bounds. However, a maximum likelihood estimation approach, which we call MLLS, dominates empirically. In this paper, we present a unified view of the two methods and the first theoretical characterization of MLLS. Our contributions include (i) consistency conditions for MLLS, which include calibration of the classifier and a confusion matrix invertibility condition that BBSE also requires; (ii) a unified framework, casting BBSE as roughly equivalent to MLLS for a particular choice of calibration method; and (iii) a decomposition of MLLS's finite-sample error into terms reflecting miscalibration and estimation error. Our analysis attributes BBSE's statistical inefficiency to a loss of information due to coarse calibration. Experiments on synthetic data, MNIST, and CIFAR10 support our findings.
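To make the two estimators concrete, here is a minimal sketch of both on synthetic Gaussian data. BBSE solves the moment-matching system $C w = \hat{\mu}$, where $C$ is the source confusion matrix over hard predictions and $\hat{\mu}$ is the predicted-label distribution on the target; MLLS runs EM on calibrated soft posteriors. The Gaussian setup, sample sizes, and iteration count are illustrative choices, not from the paper; the classifier is the exact Bayes posterior, so it is perfectly calibrated by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes: p(x|y=0)=N(-1,1), p(x|y=1)=N(+1,1)  (illustrative setup)
mu = np.array([-1.0, 1.0])
p_s = np.array([0.5, 0.5])       # source label marginal
p_t_true = np.array([0.8, 0.2])  # target label marginal (unknown, to be estimated)

def sample(p_y, n):
    y = rng.choice(2, size=n, p=p_y)
    return rng.normal(mu[y], 1.0), y

def posterior(x):
    # Calibrated source posterior p_s(y|x) via Bayes' rule
    lik = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) * p_s
    return lik / lik.sum(axis=1, keepdims=True)

xs, ys = sample(p_s, 50_000)     # labeled source sample
xt, _ = sample(p_t_true, 50_000) # unlabeled target sample

# --- BBSE: solve C w = mu_hat, with C[i, j] = p_s(yhat = i, y = j) ---
yhat_s = posterior(xs).argmax(axis=1)
C = np.zeros((2, 2))
np.add.at(C, (yhat_s, ys), 1.0)
C /= len(ys)
mu_hat = np.bincount(posterior(xt).argmax(axis=1), minlength=2) / len(xt)
w_bbse = np.linalg.solve(C, mu_hat)  # importance weights p_t(y) / p_s(y)
p_t_bbse = w_bbse * p_s

# --- MLLS: EM on calibrated soft predictions ---
f_t = posterior(xt)
p_t_mlls = p_s.copy()  # initialize at the source marginal
for _ in range(200):
    g = f_t * (p_t_mlls / p_s)         # reweight posteriors by current estimate
    g /= g.sum(axis=1, keepdims=True)  # E-step: target posteriors under estimate
    p_t_mlls = g.mean(axis=0)          # M-step: update the label marginal

print("BBSE:", p_t_bbse, "MLLS:", p_t_mlls)
```

With a well-calibrated classifier and an invertible confusion matrix, both estimates land close to the true target marginal; the abstract's point is that MLLS, by using the full soft posteriors rather than the coarse hard-prediction confusion matrix, tends to be statistically more efficient.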
Cite
Text
Garg et al. "A Unified View of Label Shift Estimation." Neural Information Processing Systems, 2020.
Markdown
[Garg et al. "A Unified View of Label Shift Estimation." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/garg2020neurips-unified/)
BibTeX
@inproceedings{garg2020neurips-unified,
title = {{A Unified View of Label Shift Estimation}},
author = {Garg, Saurabh and Wu, Yifan and Balakrishnan, Sivaraman and Lipton, Zachary},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/garg2020neurips-unified/}
}