Measuring Forgetting of Memorized Training Examples

Abstract

Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples which appeared early in training are forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure to what extent models "forget" the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convexity can prevent forgetting from happening in the worst case, standard image, speech, and language models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training with extremely large datasets (for instance, those examples used to pre-train a model) may observe privacy benefits at the expense of examples seen later.
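The core idea (tracking how a memorization signal for an example decays as training continues on other data) can be illustrated with a toy sketch. The setup below is hypothetical and not the paper's protocol: a linear model first memorizes a single "canary" example, then keeps training on unrelated fresh data, while we monitor the canary's loss, a simple loss-based membership-inference signal. A rising canary loss over time corresponds to the forgetting the paper measures.

```python
import numpy as np

# Hypothetical toy illustration (not the paper's experimental setup):
# a linear regression model trained with plain SGD.
rng = np.random.default_rng(0)
d = 20
w = np.zeros(d)

# One "canary" training example the model will memorize early on.
canary_x = rng.normal(size=d)
canary_y = 1.0

def sgd_step(w, x, y, lr=0.01):
    # One SGD step on squared error 0.5 * (w @ x - y)**2.
    grad = (w @ x - y) * x
    return w - lr * grad

def canary_loss(w):
    # Loss on the canary: a crude loss-based membership signal.
    return 0.5 * (w @ canary_x - canary_y) ** 2

# Phase 1: the canary appears repeatedly "early in training".
for _ in range(50):
    w = sgd_step(w, canary_x, canary_y)
loss_after_memorization = canary_loss(w)  # near zero: memorized

# Phase 2: training continues on fresh, unrelated examples only.
trajectory = []
for step in range(2000):
    x = rng.normal(size=d)
    y = rng.normal()
    w = sgd_step(w, x, y)
    if step % 500 == 0:
        trajectory.append(canary_loss(w))
loss_at_end = canary_loss(w)

print(loss_after_memorization, loss_at_end)
```

Under this (nondeterministic) continued training, the canary's loss climbs back toward that of an unseen point, mirroring the paper's finding that examples not seen recently become harder to attack.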

Cite

Text

Jagielski et al. "Measuring Forgetting of Memorized Training Examples." International Conference on Learning Representations, 2023.

Markdown

[Jagielski et al. "Measuring Forgetting of Memorized Training Examples." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/jagielski2023iclr-measuring/)

BibTeX

@inproceedings{jagielski2023iclr-measuring,
  title     = {{Measuring Forgetting of Memorized Training Examples}},
  author    = {Jagielski, Matthew and Thakkar, Om and Tramer, Florian and Ippolito, Daphne and Lee, Katherine and Carlini, Nicholas and Wallace, Eric and Song, Shuang and Thakurta, Abhradeep Guha and Papernot, Nicolas and Zhang, Chiyuan},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/jagielski2023iclr-measuring/}
}