Deduplicating Training Data Mitigates Privacy Risks in Language Models

Abstract

Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence’s count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated 1000x more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
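The 10x-to-1000x example gives a concrete sense of what "superlinearly" means here: under a power-law model g(k) ∝ k^α for the expected generation rate of a sequence duplicated k times, the quoted ratio g(10)/g(1) = 1000 = 10^α would imply α ≈ 3. Note that this exponent is our extrapolation from the single ratio quoted in the abstract, not a fit reported there.

The deduplication the abstract refers to can be as simple as dropping exact duplicate sequences before training. Below is a minimal sketch of hash-based exact-match deduplication; this is an illustrative assumption on our part, not the paper's actual pipeline, which also has to contend with near-duplicates and substrings repeated across documents.

```python
import hashlib

def deduplicate_exact(sequences):
    """Remove exact duplicate training sequences, keeping first occurrences.

    A minimal sketch of exact-match deduplication; real pipelines
    additionally handle near-duplicates and repeated substrings.
    """
    seen = set()
    unique = []
    for seq in sequences:
        # Hash the normalized text so `seen` stays small even for long sequences.
        digest = hashlib.sha256(seq.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(seq)
    return unique

# Example: the repeated sequence survives only once after deduplication.
corpus = ["the quick brown fox", "some other document", "the quick brown fox"]
print(deduplicate_exact(corpus))
# ['the quick brown fox', 'some other document']
```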

Cite

Text

Kandpal et al. "Deduplicating Training Data Mitigates Privacy Risks in Language Models." International Conference on Machine Learning, 2022.

Markdown

[Kandpal et al. "Deduplicating Training Data Mitigates Privacy Risks in Language Models." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/kandpal2022icml-deduplicating/)

BibTeX

@inproceedings{kandpal2022icml-deduplicating,
  title     = {{Deduplicating Training Data Mitigates Privacy Risks in Language Models}},
  author    = {Kandpal, Nikhil and Wallace, Eric and Raffel, Colin},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {10697--10707},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/kandpal2022icml-deduplicating/}
}