Optimal Representations for Covariate Shifts

Abstract

Machine learning systems often experience distribution shifts between training and testing. We introduce a simple objective whose optima are *exactly all* representations on which risk minimizers are guaranteed to be robust to Bayes-preserving shifts, e.g., covariate shifts. Our objective has two components. First, a representation must remain discriminative, i.e., some predictor must be able to minimize the source and target risk. Second, the representation's support should be invariant across source and target. We make this practical by designing self-supervised methods that only use unlabelled data and augmentations. Our objectives achieve state-of-the-art performance on DomainBed, and give insights into the robustness of recent methods, e.g., CLIP.
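The abstract's two-component objective can be illustrated with a toy sketch: a task-loss term keeps the representation discriminative, while a second term penalizes mismatch between the representation's support under the source data and under augmented views. This is a minimal illustration, not the paper's actual method; the function names, the histogram-based support proxy, and the weighting parameter `lam` are all assumptions introduced here.

```python
import numpy as np

def support_penalty(z_src, z_aug, n_bins=10):
    """Crude proxy for support invariance: fraction of representation
    bins occupied under one distribution but not the other.
    (Hypothetical helper, not from the paper.)"""
    lo = min(z_src.min(), z_aug.min())
    hi = max(z_src.max(), z_aug.max())
    h_src, _ = np.histogram(z_src, bins=n_bins, range=(lo, hi))
    h_aug, _ = np.histogram(z_aug, bins=n_bins, range=(lo, hi))
    # XOR of occupancy: a bin counts only if exactly one side occupies it.
    mismatch = np.logical_xor(h_src > 0, h_aug > 0)
    return mismatch.mean()

def objective(task_loss, z_src, z_aug, lam=1.0):
    """Two-component objective: stay discriminative (low task loss)
    while keeping the representation's support invariant."""
    return task_loss + lam * support_penalty(z_src, z_aug)
```

When source and augmented representations share the same support, the penalty vanishes and only the discriminative term remains; disjoint supports incur a positive penalty.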

Cite

Text

Dubois et al. "Optimal Representations for Covariate Shifts." NeurIPS 2021 Workshops: DistShift, 2021.

Markdown

[Dubois et al. "Optimal Representations for Covariate Shifts." NeurIPS 2021 Workshops: DistShift, 2021.](https://mlanthology.org/neuripsw/2021/dubois2021neuripsw-optimal/)

BibTeX

@inproceedings{dubois2021neuripsw-optimal,
  title     = {{Optimal Representations for Covariate Shifts}},
  author    = {Dubois, Yann and Ruan, Yangjun and Maddison, Chris J.},
  booktitle = {NeurIPS 2021 Workshops: DistShift},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/dubois2021neuripsw-optimal/}
}