Teach LLMs to Phish: Stealing Private Information from Language Models
Abstract
When large language models are trained on private data, it can be a \textit{significant} privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new \emph{practical} data extraction attack that we call ``neural phishing''. This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data with upwards of $10\%$ attack success rates, at times, as high as $50\%$. Our attack assumes only that an adversary can insert as few as $10$s of benign-appearing sentences into the training dataset using only vague priors on the structure of the user data.
Cite
Text
Panda et al. "Teach LLMs to Phish: Stealing Private Information from Language Models." International Conference on Learning Representations, 2024.Markdown
[Panda et al. "Teach LLMs to Phish: Stealing Private Information from Language Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/panda2024iclr-teach/)BibTeX
@inproceedings{panda2024iclr-teach,
title = {{Teach LLMs to Phish: Stealing Private Information from Language Models}},
author = {Panda, Ashwinee and Choquette-Choo, Christopher A. and Zhang, Zhengming and Yang, Yaoqing and Mittal, Prateek},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/panda2024iclr-teach/}
}