Lessons from Red Teaming 100 Generative AI Products
Abstract
In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there is significant debate about how red teaming operations should be conducted. Based on our experience red teaming over 100 generative AI products at Microsoft, we present our internal threat model ontology and eight main lessons we have learned:

1. Understand what the system can do and where it is applied
2. You don't have to compute gradients to break an AI system
3. AI red teaming is not safety benchmarking
4. Automation can help cover more of the risk landscape
5. The human element of AI red teaming is crucial
6. Responsible AI harms are pervasive but difficult to measure
7. LLMs amplify existing security risks and introduce new ones
8. AI safety and security will never be "solved"

By sharing these qualitative insights alongside examples from our operations, we offer practical recommendations aimed at aligning red teaming efforts with real-world risks. We also highlight aspects of AI red teaming that are often misunderstood and discuss open questions for the field to consider.
Cite
Text
Bullwinkel et al. "Lessons from Red Teaming 100 Generative AI Products." NeurIPS 2024 Workshops: Red_Teaming_GenAI, 2024.
Markdown
[Bullwinkel et al. "Lessons from Red Teaming 100 Generative AI Products." NeurIPS 2024 Workshops: Red_Teaming_GenAI, 2024.](https://mlanthology.org/neuripsw/2024/bullwinkel2024neuripsw-lessons/)
BibTeX
@inproceedings{bullwinkel2024neuripsw-lessons,
  title = {{Lessons from Red Teaming 100 Generative AI Products}},
  author = {Bullwinkel, Blake and Minnich, Amanda J. and Chawla, Shiven and Munoz, Gary David Lopez and Pouliot, Martin and Maxwell, Whitney and de Gruyter, Joris and Pratt, Katherine and Qi, Saphir and Chikanov, Nina and Lutz, Roman and Dheekonda, Raja Sekhar Rao and Jagdagdorj, Bolor-Erdene and Lundeen, Rich and Vaughan, Sam and Westerhoff, Victoria and Bryan, Pete and Kumar, Ram Shankar Siva and Zunger, Yonatan and Russinovich, Mark},
  booktitle = {NeurIPS 2024 Workshops: Red_Teaming_GenAI},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/bullwinkel2024neuripsw-lessons/}
}