Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training

Abstract

Machine learning systems often acquire biases by leveraging undesired features in the data, affecting accuracy unevenly across different sub-populations of the data. However, our current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup that models different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setup, which we prove to be exact in the high-dimensional limit. Notably, our analysis identifies different properties of the sub-populations that drive bias at different timescales and hence reveals a shifting preference of our classifier during training. By applying our general solution to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real data, i.e., the CIFAR10, MNIST, and CelebA datasets.
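
As a concrete illustration of the kind of setup the abstract describes (not the paper's exact analytical model), the sketch below trains a linear classifier with online SGD on a two-group Gaussian mixture and tracks per-group test accuracy through training. The dimension `d`, learning rate `lr`, group proportions `rho`, and per-group signal strengths `salience` are illustrative assumptions chosen for this example.

```python
# Minimal sketch, assuming a two-group Gaussian-mixture with a shared
# "teacher" direction: online SGD on a linear classifier, monitoring
# per-group accuracy over training. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 500                   # input dimension (high-dimensional regime)
steps = 40 * d            # number of online SGD steps
lr = 0.2 / d              # learning rate scaled as 1/d

rho = [0.8, 0.2]          # sub-population proportions (majority, minority)
salience = [1.0, 3.0]     # per-group signal strength along the teacher
mu = rng.standard_normal(d)
mu /= np.linalg.norm(mu)  # shared "teacher" direction

def sample(g):
    """Draw one (x, y) pair from sub-population g of the mixture."""
    y = rng.choice([-1.0, 1.0])
    x = salience[g] * y * mu + rng.standard_normal(d)
    return x, y

w = np.zeros(d)
history = {0: [], 1: []}
for t in range(steps):
    g = rng.choice(2, p=rho)       # pick a sub-population, then a sample
    x, y = sample(g)
    w += lr * (y - w @ x) * x      # one SGD step on the squared loss
    if t % (4 * d) == 0:           # periodically estimate group accuracies
        for gi in (0, 1):
            xs, ys = zip(*(sample(gi) for _ in range(300)))
            preds = np.where(np.array(xs) @ w >= 0, 1.0, -1.0)
            history[gi].append(float(np.mean(preds == np.array(ys))))

print("majority accuracy over training:", np.round(history[0], 2))
print("minority accuracy over training:", np.round(history[1], 2))
```

In the regime the abstract points to, such per-group accuracy curves can cross: a more frequent sub-population can dominate early training while a more salient one prevails later, which is the "shifting preference" of the classifier during training.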

Cite

Text

Jain et al. "Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training." Neural Information Processing Systems, 2024. doi:10.52202/079017-0770

Markdown

[Jain et al. "Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/jain2024neurips-bias/) doi:10.52202/079017-0770

BibTeX

@inproceedings{jain2024neurips-bias,
  title     = {{Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training}},
  author    = {Jain, Anchit and Nobahari, Rozhin and Baratin, Aristide and Mannelli, Stefano Sarao},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0770},
  url       = {https://mlanthology.org/neurips/2024/jain2024neurips-bias/}
}