Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models

Abstract

Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have the potential to cause real-world impact. Policymakers, model providers, and researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents to help mitigate cyberrisk and investigate opportunities for penetration testing. Toward that end, we introduce Cybench, a framework for specifying cybersecurity tasks and evaluating agents on those tasks. We include 40 professional-level Capture the Flag (CTF) tasks from 4 distinct CTF competitions, chosen to be recent, meaningful, and spanning a wide range of difficulties. Each task includes its own description and starter files, and is initialized in an environment where an agent can execute commands and observe outputs. Since many tasks are beyond the capabilities of existing LM agents, we introduce subtasks for each task, which break down a task into intermediary steps for a more detailed evaluation. To evaluate agent capabilities, we construct a cybersecurity agent and evaluate 8 models: GPT-4o, OpenAI o1-preview, Claude 3 Opus, Claude 3.5 Sonnet, Mixtral 8x22b Instruct, Gemini 1.5 Pro, Llama 3 70B Chat, and Llama 3.1 405B Instruct. For the top-performing models (GPT-4o and Claude 3.5 Sonnet), we further investigate performance across 4 agent scaffolds (structured bash, action-only, pseudoterminal, and web search). Without subtask guidance, agents leveraging Claude 3.5 Sonnet, GPT-4o, OpenAI o1-preview, and Claude 3 Opus successfully solved complete tasks that took human teams up to 11 minutes to solve. In comparison, the most difficult task took human teams 24 hours and 54 minutes to solve. Anonymized code and data are available at https://drive.google.com/file/d/1kp3H0pw1WMAH-Qyyn9WA0ZKmEa7Cr4D4 and https://drive.google.com/file/d/1BcTQ02BBR0m5LYTiK-tQmIK17_TxijIy.
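
The task structure the abstract describes (a description, starter files, a final flag, optional subtasks, and an execute-and-observe loop) can be pictured with a minimal sketch. The Python sketch below is purely illustrative: the class and function names (CTFTask, Subtask, execute_command, is_solved) are hypothetical and do not reflect the framework's actual code or API.

    import subprocess
    from dataclasses import dataclass, field

    # Illustrative sketch only: these names are hypothetical and are not
    # the framework's actual classes or API.

    @dataclass
    class Subtask:
        question: str  # intermediary question guiding the agent toward the flag
        answer: str    # expected answer used to score the subtask

    @dataclass
    class CTFTask:
        name: str
        description: str                 # task prompt shown to the agent
        starter_files: list[str] = field(default_factory=list)
        flag: str = ""                   # final flag that marks the task solved
        subtasks: list[Subtask] = field(default_factory=list)

    def execute_command(cmd: str, timeout: int = 60) -> str:
        """Run a shell command in the task environment and return its output,
        mirroring the execute-and-observe loop the abstract describes."""
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=timeout
        )
        return result.stdout + result.stderr

    def is_solved(task: CTFTask, agent_output: str) -> bool:
        """A task counts as solved when the agent's output contains the flag."""
        return task.flag in agent_output

Under this sketch, a scaffold would repeatedly prompt the model with the task description and prior command outputs, run each proposed command through execute_command, and stop once is_solved returns True or a step budget is exhausted; subtask-guided runs would score the agent's answers against each Subtask in turn.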

Cite

Text

Zhang et al. "Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-cybench/)

BibTeX

@inproceedings{zhang2025iclr-cybench,
  title     = {{Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models}},
  author    = {Zhang, Andy K. and Perry, Neil and Dulepet, Riya and Ji, Joey and Menders, Celeste and Lin, Justin W. and Jones, Eliot and Hussein, Gashon and Liu, Samantha and Jasper, Donovan Julian and Peetathawatchai, Pura and Glenn, Ari and Sivashankar, Vikram and Zamoshchin, Daniel and Glikbarg, Leo and Askaryar, Derek and Yang, Haoxiang and Zhang, Aolin and Alluri, Rishi and Tran, Nathan and Sangpisit, Rinnara and Oseleononmen, Kenny O. and Boneh, Dan and Ho, Daniel E. and Liang, Percy},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-cybench/}
}