Malign Overfitting: Interpolation and Invariance Are Fundamentally at Odds
Abstract
Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e. interpolate) the training data. This suggests that the phenomenon of "benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work, we provide a theoretical justification for these observations. We prove that---even in the simplest of settings---any interpolating learning rule (with an arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that---in the same setting---successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
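The tension the abstract describes can be illustrated with a small NumPy sketch (not the paper's construction; all variable names and the specific data-generating process below are illustrative assumptions). We build an over-parameterized training set in which a "spurious" feature is perfectly aligned with the label, and observe that the minimum-norm interpolating linear classifier fits the training labels exactly while placing weight on the spurious feature, whereas a heavily ridge-regularized solution declines to interpolate:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 20, 100  # over-parameterized: many more features than samples
y = rng.choice([-1.0, 1.0], size=n)

# "core" feature: noisy but genuinely related to the label
x_core = y + 0.5 * rng.standard_normal(n)
# "spurious" feature: perfectly correlated with the label in training data only
x_spur = y.copy()
# high-dimensional noise features
noise = rng.standard_normal((n, d - 2))

X = np.column_stack([x_core, x_spur, noise])

# Minimum-norm interpolator (the solution gradient descent on squared loss
# converges to in the over-parameterized linear setting)
w_interp = np.linalg.pinv(X) @ y
assert np.allclose(X @ w_interp, y)  # interpolates: zero training error
print("spurious-feature weight of interpolator:", w_interp[1])

# A strongly regularized ridge solution does not interpolate
lam = 100.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print("ridge training residual:", np.linalg.norm(X @ w_ridge - y))
```

Since the interpolation constraint forces the fit to pass through every training point, the clean spurious column is an easy direction for the minimum-norm solution to exploit; a non-interpolating rule is free to trade some training error for less reliance on it, which is the behavior the paper's proposed algorithm formalizes.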
Cite
Text
Wald et al. "Malign Overfitting: Interpolation and Invariance Are Fundamentally at Odds." International Conference on Learning Representations, 2023.
Markdown
[Wald et al. "Malign Overfitting: Interpolation and Invariance Are Fundamentally at Odds." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/wald2023iclr-malign/)
BibTeX
@inproceedings{wald2023iclr-malign,
title = {{Malign Overfitting: Interpolation and Invariance Are Fundamentally at Odds}},
author = {Wald, Yoav and Yona, Gal and Shalit, Uri and Carmon, Yair},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/wald2023iclr-malign/}
}