Ignore Previous Prompt: Attack Techniques for Language Models
Abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.
Cite
Text
Perez and Ribeiro. "Ignore Previous Prompt: Attack Techniques for Language Models." NeurIPS 2022 Workshops: MLSW, 2022.Markdown
[Perez and Ribeiro. "Ignore Previous Prompt: Attack Techniques for Language Models." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/perez2022neuripsw-ignore/)BibTeX
@inproceedings{perez2022neuripsw-ignore,
title = {{Ignore Previous Prompt: Attack Techniques for Language Models}},
author = {Perez, Fábio and Ribeiro, Ian},
booktitle = {NeurIPS 2022 Workshops: MLSW},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/perez2022neuripsw-ignore/}
}