The Pitfalls of Memorization: When Memorization Hurts Generalization

Abstract

Neural networks often learn simple explanations that fit the majority of the data while memorizing exceptions that deviate from these explanations. This behavior leads to poor generalization when the learned explanations rely on spurious correlations. In this work, we formalize $\textit{the interplay between memorization and generalization}$, showing that spurious correlations are particularly harmful to generalization when they are combined with memorization. Memorization can reduce training loss to zero, leaving no incentive to learn robust, generalizable patterns. To address this, we propose $\textit{memorization-aware training}$ (MAT), which uses held-out predictions as a signal of memorization to shift a model's logits. MAT encourages learning robust patterns that are invariant across distributions, improving generalization under distribution shifts.
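The abstract describes MAT only at a high level: held-out predictions serve as a memorization signal that shifts the model's logits during training. As an illustration only, the sketch below assumes one plausible form of such a shift, an additive adjustment of the training logits by held-out log-probabilities before the cross-entropy loss; the function name `mat_loss`, the `alpha` hyperparameter, and the exact adjustment are hypothetical and not taken from the paper.

    import torch
    import torch.nn.functional as F

    def mat_loss(logits, targets, heldout_probs, alpha=1.0):
        """Hypothetical sketch of a memorization-aware loss (not the paper's exact formulation).

        Args:
            logits: (batch, num_classes) raw scores from the model being trained.
            targets: (batch,) integer class labels.
            heldout_probs: (batch, num_classes) class probabilities assigned to each
                example by predictors that never trained on it (e.g., via cross-fitting),
                used here as a proxy signal for what can be learned without memorization.
            alpha: assumed strength of the logit shift.
        """
        # Shift the logits by the held-out log-probabilities before the softmax.
        # Examples that held-out predictors already classify correctly contribute
        # less gradient, so the trained model is pushed toward patterns that
        # generalize rather than toward memorizing exceptions.
        shifted_logits = logits + alpha * torch.log(heldout_probs.clamp_min(1e-12))
        return F.cross_entropy(shifted_logits, targets)

In this sketch, only the unshifted `logits` would be used at evaluation time; the shift acts purely as a training-time signal.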

Cite

Text

Bayat et al. "The Pitfalls of Memorization: When Memorization Hurts Generalization." International Conference on Learning Representations, 2025.

Markdown

[Bayat et al. "The Pitfalls of Memorization: When Memorization Hurts Generalization." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/bayat2025iclr-pitfalls/)

BibTeX

@inproceedings{bayat2025iclr-pitfalls,
  title     = {{The Pitfalls of Memorization: When Memorization Hurts Generalization}},
  author    = {Bayat, Reza and Pezeshki, Mohammad and Dohmatob, Elvis and Lopez-Paz, David and Vincent, Pascal},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/bayat2025iclr-pitfalls/}
}