On Bias-Variance Alignment in Deep Models

Abstract

Classical wisdom in machine learning holds that the generalization error can be decomposed into bias and variance, and that these two terms exhibit a \emph{trade-off}. However, in this paper we show that for an ensemble of deep-learning-based classification models, bias and variance are \emph{aligned} at the sample level: squared bias is approximately \emph{equal} to variance for correctly classified sample points. We present empirical evidence confirming this phenomenon across a variety of deep learning models and datasets. Moreover, we study this phenomenon from two theoretical perspectives: calibration and neural collapse. We first show theoretically that, under the assumption that the models are well calibrated, bias-variance alignment follows. Second, starting from the picture provided by neural collapse theory, we show an approximate correlation between bias and variance.
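To make the per-sample decomposition concrete, the following is a minimal sketch (not code from the paper) of how squared bias and variance can be computed per sample for an ensemble of probabilistic classifiers, using the standard squared-error decomposition against one-hot labels. All array shapes and the synthetic ensemble outputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: predicted class probabilities from
# n_models independently trained runs on the same n_samples inputs.
n_models, n_samples, n_classes = 10, 5, 3
logits = rng.normal(size=(n_models, n_samples, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

labels = rng.integers(n_classes, size=n_samples)
one_hot = np.eye(n_classes)[labels]

# Ensemble-averaged prediction per sample.
mean_pred = probs.mean(axis=0)

# Per-sample squared bias: distance of the mean prediction to the label.
sq_bias = ((mean_pred - one_hot) ** 2).sum(axis=-1)

# Per-sample variance: average spread of members around the mean prediction.
variance = ((probs - mean_pred) ** 2).sum(axis=-1).mean(axis=0)

# Sanity check: the expected squared error decomposes exactly
# into squared bias plus variance, per sample.
mse = ((probs - one_hot) ** 2).sum(axis=-1).mean(axis=0)
assert np.allclose(mse, sq_bias + variance)
```

The paper's alignment claim is then that, for correctly classified samples, `sq_bias` and `variance` are approximately equal, rather than trading off against each other.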

Cite

Text

Chen et al. "On Bias-Variance Alignment in Deep Models." International Conference on Learning Representations, 2024.

Markdown

[Chen et al. "On Bias-Variance Alignment in Deep Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/chen2024iclr-biasvariance/)

BibTeX

@inproceedings{chen2024iclr-biasvariance,
  title     = {{On Bias-Variance Alignment in Deep Models}},
  author    = {Chen, Lin and Lukasik, Michal and Jitkrittum, Wittawat and You, Chong and Kumar, Sanjiv},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/chen2024iclr-biasvariance/}
}