Against the Achilles' Heel: A Survey on Red Teaming for Generative Models

Abstract

Generative models are rapidly gaining popularity and being integrated into everyday applications, raising concerns over their safe use as various vulnerabilities are exposed. In light of this, the field of red teaming is undergoing fast-paced growth, highlighting the need for a comprehensive survey covering the entire pipeline and addressing emerging topics. Our extensive survey, which examines over 120 papers, introduces a taxonomy of fine-grained attack strategies grounded in the inherent capabilities of language models. Additionally, we have developed the “searcher” framework to unify various automatic red teaming approaches. Moreover, our survey covers novel areas including multimodal attacks and defenses, risks around LLM-based agents, overkill of harmless queries, and the balance between harmlessness and helpfulness. Warning: This paper contains examples that may be offensive, harmful, or biased.

Cite

Text

Lin et al. "Against the Achilles' Heel: A Survey on Red Teaming for Generative Models." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.17654

Markdown

[Lin et al. "Against the Achilles' Heel: A Survey on Red Teaming for Generative Models." Journal of Artificial Intelligence Research, 2025.](https://mlanthology.org/jair/2025/lin2025jair-against/) doi:10.1613/JAIR.1.17654

BibTeX

@article{lin2025jair-against,
  title     = {{Against the Achilles' Heel: A Survey on Red Teaming for Generative Models}},
  author    = {Lin, Lizhi and Mu, Honglin and Zhai, Zenan and Wang, Minghan and Wang, Yuxia and Wang, Renxi and Gao, Junjie and Zhang, Yixuan and Che, Wanxiang and Baldwin, Timothy and Han, Xudong and Li, Haonan},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2025},
  pages     = {687--775},
  doi       = {10.1613/JAIR.1.17654},
  volume    = {82},
  url       = {https://mlanthology.org/jair/2025/lin2025jair-against/}
}