Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game

Abstract

While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to *prompt injection attacks*: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 563,000 prompt injection attacks and 118,000 prompt-based "defenses" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is the first dataset that includes both human-generated attacks and defenses for instruction-following LLMs. The attacks in our dataset have easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as *prompt extraction* and *prompt hijacking*. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game. We release data and code at [tensortrust.ai/paper](https://tensortrust.ai/paper)

Cite

Text

Toyer et al. "Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game." International Conference on Learning Representations, 2024.

Markdown

[Toyer et al. "Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/toyer2024iclr-tensor/)

BibTeX

@inproceedings{toyer2024iclr-tensor,
  title     = {{Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game}},
  author    = {Toyer, Sam and Watkins, Olivia and Mendes, Ethan Adrian and Svegliato, Justin and Bailey, Luke and Wang, Tiffany and Ong, Isaac and Elmaaroufi, Karim and Abbeel, Pieter and Darrell, Trevor and Ritter, Alan and Russell, Stuart},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/toyer2024iclr-tensor/}
}