Jailbreaking Black Box Large Language Models in Twenty Queries

Abstract

There is growing research interest in ensuring that large language models align with human safety and ethical guidelines. Adversarial attacks known as 'jailbreaks' pose a significant threat because they coax models into overriding alignment safeguards. Identifying these vulnerabilities by attacking a language model (red teaming) is instrumental in understanding inherent weaknesses and preventing misuse. We present Prompt Automatic Iterative Refinement (PAIR), which generates semantic jailbreaks with only black-box access to a language model. PAIR draws inspiration from the human process of social engineering and employs an attacker language model to automatically generate adversarial prompts in place of a human; the attacker model uses the target model's responses as additional context to iteratively refine the adversarial prompt. Empirically, PAIR often requires fewer than 20 queries to produce a jailbreak, orders of magnitude fewer than prior jailbreak attacks, and it achieves competitive jailbreaking success rates and transferability on open and closed-source language models, including GPT-3.5/4, Vicuna, and PaLM.
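
To make the loop described in the abstract concrete, the sketch below gives one plausible reading of the PAIR iteration: an attacker model proposes a prompt, the black-box target responds, a judge scores the attempt, and the transcript is fed back so the attacker can refine its next prompt. This is an illustrative outline, not the authors' implementation; the names `attacker`, `target`, `judge`, `max_queries`, and `success_threshold` are hypothetical placeholders.

```python
from typing import Callable, Dict, List, Optional


def pair_attack(
    attacker: Callable[[List[Dict[str, str]]], str],  # attacker LLM: conversation history -> candidate prompt
    target: Callable[[str], str],                      # target LLM (black-box): prompt -> response
    judge: Callable[[str, str, str], int],             # judge: (goal, prompt, response) -> score, e.g. 1-10
    goal: str,
    max_queries: int = 20,
    success_threshold: int = 10,
) -> Optional[str]:
    """Iteratively refine an adversarial prompt against a black-box target model.

    Returns the first candidate prompt the judge scores as a successful
    jailbreak, or None if the query budget is exhausted.
    """
    history: List[Dict[str, str]] = [
        {"role": "system", "content": f"Objective: {goal}"}
    ]
    for _ in range(max_queries):
        # Attacker proposes a new adversarial prompt given the attempts so far.
        candidate = attacker(history)
        # One black-box query to the target model.
        response = target(candidate)
        # Judge rates how fully the response accomplishes the objective.
        score = judge(goal, candidate, response)
        if score >= success_threshold:
            return candidate
        # Feed the failed attempt and its score back as context for refinement.
        history.append({"role": "assistant", "content": candidate})
        history.append(
            {"role": "user", "content": f"Target response: {response}\nScore: {score}"}
        )
    return None
```

In practice each callable would wrap an API call to the respective model; the structure above only illustrates why the method needs black-box access alone and why the query count stays small when refinement converges quickly.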

Cite

Text

Chao et al. "Jailbreaking Black Box Large Language Models in Twenty Queries." NeurIPS 2023 Workshops: R0-FoMo, 2023.

Markdown

[Chao et al. "Jailbreaking Black Box Large Language Models in Twenty Queries." NeurIPS 2023 Workshops: R0-FoMo, 2023.](https://mlanthology.org/neuripsw/2023/chao2023neuripsw-jailbreaking/)

BibTeX

@inproceedings{chao2023neuripsw-jailbreaking,
  title     = {{Jailbreaking Black Box Large Language Models in Twenty Queries}},
  author    = {Chao, Patrick and Robey, Alexander and Dobriban, Edgar and Hassani, Hamed and Pappas, George J. and Wong, Eric},
  booktitle = {NeurIPS 2023 Workshops: R0-FoMo},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/chao2023neuripsw-jailbreaking/}
}