AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs
Abstract
Large Language Models (LLMs) are vulnerable to jailbreaking attacks that lead to the generation of inappropriate or harmful content. Manual red-teaming requires a time-consuming search for adversarial prompts, whereas automatic adversarial prompt generation often produces semantically meaningless attacks that do not scale well. In this paper, we present a novel method that uses another LLM, called AdvPrompter, to generate human-readable adversarial prompts in seconds. AdvPrompter, which is trained using an alternating optimization algorithm, generates suffixes that veil the input instruction without changing its meaning, such that the TargetLLM is lured into giving a harmful response. Experimental results on popular open-source TargetLLMs show highly competitive results on the AdvBench and HarmBench datasets, which also transfer to closed-source black-box LLMs. We also show that training on adversarial suffixes generated by AdvPrompter is a promising strategy for improving the robustness of LLMs to jailbreaking attacks.
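To make the alternating optimization concrete, the following is a minimal, hypothetical Python sketch of the training loop the abstract describes. The helper names (target_llm_loss, generate_suffix_candidates, finetune_advprompter) and the toy random scoring are illustrative stand-ins assumed for this sketch, not the authors' implementation; the sketch only shows the alternation between searching for low-loss suffixes per instruction and fine-tuning AdvPrompter on the winners.

import random

def target_llm_loss(instruction: str, suffix: str) -> float:
    # Stand-in for the TargetLLM's adversarial loss; a real implementation
    # would compute the negative log-likelihood of an affirmative harmful
    # response given instruction + suffix. Lower is better for the attacker.
    random.seed(hash((instruction, suffix)) % (2**32))
    return random.random()

def generate_suffix_candidates(advprompter: str, instruction: str, k: int) -> list[str]:
    # Stand-in for sampling k candidate suffixes from AdvPrompter.
    return [f"{advprompter} suffix {i} for '{instruction}'" for i in range(k)]

def finetune_advprompter(advprompter: str, pairs: list[tuple[str, str]]) -> str:
    # Stand-in for supervised fine-tuning on the selected (instruction, suffix) pairs.
    print(f"fine-tuning {advprompter} on {len(pairs)} pairs")
    return advprompter

instructions = ["instruction A", "instruction B"]
advprompter = "advprompter-v0"

for epoch in range(3):
    # Suffix-optimization step: for each instruction, keep the candidate
    # suffix that minimizes the TargetLLM's adversarial loss.
    pairs = []
    for x in instructions:
        candidates = generate_suffix_candidates(advprompter, x, k=8)
        best = min(candidates, key=lambda q: target_llm_loss(x, q))
        pairs.append((x, best))
    # Fine-tuning step: train AdvPrompter to regenerate the winning suffixes,
    # amortizing the per-instruction search into the model's weights.
    advprompter = finetune_advprompter(advprompter, pairs)

After training, the fine-tuned AdvPrompter emits a suffix for a new instruction in a single generation pass, with no per-prompt search, which is what enables the abstract's claim of producing adversarial prompts in seconds.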
Cite
Text
Paulus et al. "AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Paulus et al. "AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/paulus2025icml-advprompter/)
BibTeX
@inproceedings{paulus2025icml-advprompter,
title = {{AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs}},
author = {Paulus, Anselm and Zharmagambetov, Arman and Guo, Chuan and Amos, Brandon and Tian, Yuandong},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {48439--48469},
volume = {267},
url = {https://mlanthology.org/icml/2025/paulus2025icml-advprompter/}
}