When Majority Rules, Minority Loses: Bias Amplification of Gradient Descent

Abstract

Despite growing empirical evidence of bias amplification in machine learning, its theoretical foundations remain poorly understood. We develop a formal framework for majority-minority learning tasks, showing how standard training can favor majority groups and produce stereotypical predictors that neglect minority-specific features. Assuming population and variance imbalance, our analysis reveals three key findings: (i) the close proximity between "full-data" and stereotypical predictors, (ii) the dominance of a region in which training the full model tends merely to learn the majority traits, and (iii) a lower bound on the additional training required. Our results are illustrated through deep learning experiments on tabular and image classification tasks.

Cite

Text

Bachoc et al. "When Majority Rules, Minority Loses: Bias Amplification of Gradient Descent." Advances in Neural Information Processing Systems, 2025.

Markdown

[Bachoc et al. "When Majority Rules, Minority Loses: Bias Amplification of Gradient Descent." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/bachoc2025neurips-majority/)

BibTeX

@inproceedings{bachoc2025neurips-majority,
  title     = {{When Majority Rules, Minority Loses: Bias Amplification of Gradient Descent}},
  author    = {Bachoc, François and Bolte, Jerome and Boustany, Ryan and Loubes, Jean-Michel},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/bachoc2025neurips-majority/}
}