Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

Abstract

While Large Language Models (LLMs) display versatile functionality, they continue to generate harmful, biased, and toxic content, as demonstrated by the prevalence of human-designed jailbreaks. In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that requires only black-box access to the target LLM. TAP uses an attacker LLM to iteratively refine candidate (attack) prompts until one of the refined prompts jailbreaks the target. In addition, before sending prompts to the target, TAP assesses them and prunes those unlikely to result in jailbreaks, reducing the number of queries sent to the target LLM. In empirical evaluations, we observe that TAP generates prompts that jailbreak state-of-the-art LLMs (including GPT-4 Turbo and GPT-4o) for more than 80% of the prompts. This significantly improves upon previous state-of-the-art black-box methods for generating jailbreaks while issuing fewer queries. Furthermore, TAP can jailbreak LLMs protected by state-of-the-art guardrails, e.g., LlamaGuard.
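The abstract describes a branch-prune-query loop. Below is a minimal Python sketch of that loop under stated assumptions: the four helper functions (attacker_refine, on_topic, query_target, judge_score) are hypothetical placeholders for calls to the attacker, target, and evaluator LLMs, and the default parameters mirror commonly cited TAP settings; this is an illustration, not the authors' released implementation.

"""Minimal sketch of the Tree of Attacks with Pruning (TAP) loop.

The four helpers below are hypothetical stand-ins for LLM calls;
replace them with real API calls to experiment.
"""
from typing import List, Optional

def attacker_refine(prompt: str, goal: str, n: int) -> List[str]:
    """Ask the attacker LLM for n refined variants of `prompt`. (stub)"""
    raise NotImplementedError("call your attacker LLM here")

def on_topic(prompt: str, goal: str) -> bool:
    """Evaluator check: is `prompt` still pursuing `goal`? (stub)"""
    raise NotImplementedError("call your evaluator LLM here")

def query_target(prompt: str) -> str:
    """Send `prompt` to the black-box target LLM. (stub)"""
    raise NotImplementedError("call the target LLM here")

def judge_score(prompt: str, response: str, goal: str) -> int:
    """Evaluator rating of how jailbroken `response` is, e.g. 1-10. (stub)"""
    raise NotImplementedError("call your evaluator LLM here")

def tap(goal: str, branching: int = 4, width: int = 10,
        depth: int = 10, success: int = 10) -> Optional[str]:
    """Tree of Attacks with Pruning: branch, prune, query, repeat."""
    leaves = [goal]  # root of the attack tree: the initial prompt
    for _ in range(depth):
        # Branch: the attacker LLM proposes refinements of every leaf.
        candidates = [c for leaf in leaves
                      for c in attacker_refine(leaf, goal, branching)]
        # Prune (phase 1): drop off-topic prompts *before* querying the
        # target; this is what keeps the query count to the target low.
        candidates = [c for c in candidates if on_topic(c, goal)]
        if not candidates:
            return None
        # Query the target and have the evaluator score each response.
        scored = [(judge_score(c, query_target(c), goal), c)
                  for c in candidates]
        best_score, best_prompt = max(scored, key=lambda s: s[0])
        if best_score >= success:  # evaluator deems this a jailbreak
            return best_prompt
        # Prune (phase 2): keep only the `width` highest-scoring leaves.
        scored.sort(key=lambda s: s[0], reverse=True)
        leaves = [c for _, c in scored[:width]]
    return None  # no jailbreak found within the depth budget

Note the two distinct pruning phases: the first discards off-topic candidates before any target query is spent on them, and the second caps the tree's width after scoring, matching the query-efficiency claim in the abstract.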

Cite

Text

Mehrotra et al. "Tree of Attacks: Jailbreaking Black-Box LLMs Automatically." Neural Information Processing Systems, 2024. doi:10.52202/079017-1952

Markdown

[Mehrotra et al. "Tree of Attacks: Jailbreaking Black-Box LLMs Automatically." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/mehrotra2024neurips-tree/) doi:10.52202/079017-1952

BibTeX

@inproceedings{mehrotra2024neurips-tree,
  title     = {{Tree of Attacks: Jailbreaking Black-Box LLMs Automatically}},
  author    = {Mehrotra, Anay and Zampetakis, Manolis and Kassianik, Paul and Nelson, Blaine and Anderson, Hyrum and Singer, Yaron and Karbasi, Amin},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1952},
  url       = {https://mlanthology.org/neurips/2024/mehrotra2024neurips-tree/}
}