Universal Jailbreak Backdoors from Poisoned Human Feedback

Abstract

Reinforcement Learning from Human Feedback (RLHF) is used to align large language models to produce helpful and harmless responses. Yet, these models can be jailbroken by finding adversarial prompts that revert the model to its unaligned behavior. In this paper, we consider a new threat where an attacker poisons the RLHF data to embed a jailbreak trigger into the model as a backdoor. The trigger then acts like a universal sudo command, enabling arbitrary harmful responses without the need to search for an adversarial prompt. Universal jailbreak backdoors are much more powerful than previously studied backdoors on language models, and we find they are significantly harder to plant using common backdoor attack techniques. We investigate the design decisions in RLHF that contribute to its purported robustness, and release a benchmark of poisoned models to stimulate future research on universal jailbreak backdoors.

Cite

Text

Rando and Tramèr. "Universal Jailbreak Backdoors from Poisoned Human Feedback." International Conference on Learning Representations, 2024.

Markdown

[Rando and Tramèr. "Universal Jailbreak Backdoors from Poisoned Human Feedback." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/rando2024iclr-universal/)

BibTeX

@inproceedings{rando2024iclr-universal,
  title     = {{Universal Jailbreak Backdoors from Poisoned Human Feedback}},
  author    = {Rando, Javier and Tramèr, Florian},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/rando2024iclr-universal/}
}