AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents
Abstract
The robustness of LLMs to jailbreak attacks, where users design prompts to circumvent safety measures and misuse model capabilities, has been studied primarily for LLMs acting as simple chatbots. Meanwhile, LLM agents---which use external tools and can execute multi-stage tasks---may pose a greater risk if misused, but their robustness remains underexplored. To facilitate research on LLM agent misuse, we propose a new benchmark called AgentHarm. The benchmark includes a diverse set of 110 explicitly malicious agent tasks (440 with augmentations), covering 11 harm categories including fraud, cybercrime, and harassment. In addition to measuring whether models refuse harmful agentic requests, scoring well on AgentHarm requires jailbroken agents to maintain their capabilities following an attack to complete a multi-step task. We evaluate a range of leading LLMs and find that (1) leading LLMs are surprisingly compliant with malicious agent requests even without jailbreaking, (2) simple universal jailbreak strings can be adapted to effectively jailbreak agents, and (3) these jailbreaks enable coherent and malicious multi-step agent behavior while retaining model capabilities. To enable simple and reliable evaluation of attacks and defenses for LLM-based agents, we publicly release AgentHarm at https://huggingface.co/datasets/ai-safety-institute/AgentHarm.
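Since the benchmark is released on the Hugging Face Hub, a minimal sketch of loading it with the `datasets` library is shown below. The config name (`"harmful"`) and split name (`"test_public"`) are assumptions based on typical benchmark layouts; consult the dataset card at the URL above for the actual names and any access requirements.

```python
# Minimal sketch: loading AgentHarm from the Hugging Face Hub.
# NOTE: the config ("harmful") and split ("test_public") below are
# assumptions -- check the dataset card for the real identifiers.
from datasets import load_dataset

dataset = load_dataset(
    "ai-safety-institute/AgentHarm",  # dataset ID from the paper's release URL
    "harmful",                        # assumed config name
    split="test_public",              # assumed split name
)

# Inspect a single malicious agent task record.
print(dataset[0])
```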
Cite
Text
Andriushchenko et al. "AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents." International Conference on Learning Representations, 2025.
Markdown
[Andriushchenko et al. "AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/andriushchenko2025iclr-agentharm/)
BibTeX
@inproceedings{andriushchenko2025iclr-agentharm,
title = {{AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents}},
author = {Andriushchenko, Maksym and Souly, Alexandra and Dziemian, Mateusz and Duenas, Derek and Lin, Maxwell and Wang, Justin and Hendrycks, Dan and Zou, Andy and Kolter, J Zico and Fredrikson, Matt and Gal, Yarin and Davies, Xander},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/andriushchenko2025iclr-agentharm/}
}