Siege: Multi-Turn Jailbreaking of Large Language Models with Tree Search
Abstract
We introduce Siege, a multi-turn adversarial framework that models the gradual erosion of Large Language Model (LLM) safety through a tree search perspective. Unlike single-turn jailbreaks that rely on one meticulously engineered prompt, Siege expands the conversation at each turn in a breadth-first fashion, branching into multiple adversarial prompts that exploit partial compliance from previous responses. By tracking these incremental policy leaks and reinjecting them into subsequent queries, Siege reveals how minor concessions can accumulate into fully disallowed outputs. Evaluations on the JailbreakBench dataset show that Siege achieves a 100% success rate on GPT-3.5-turbo and 97% on GPT-4 in a single multi-turn run, using fewer queries than baselines such as Crescendo or GOAT. This tree search methodology offers an in-depth view of how model safeguards degrade over successive dialogue turns, underscoring the urgency of robust multi-turn testing procedures for language models.
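The abstract describes the core loop only at a high level; the sketch below illustrates the general breadth-first, multi-turn expansion idea under stated assumptions. It is not the authors' implementation: the helper callables (`generate_adversarial_prompts`, `query_target_model`, `compliance_score`) and parameters such as `branching_factor` are hypothetical stand-ins passed in by the caller.

```python
# Minimal sketch of a breadth-first multi-turn tree search, assuming hypothetical
# helper callables supplied by the caller (not the paper's actual interfaces):
#   generate_adversarial_prompts(goal, history, leaks, n) -> list[str]
#   query_target_model(history) -> str
#   compliance_score(goal, response) -> float in [0, 1]

from dataclasses import dataclass, field


@dataclass
class Node:
    """One conversation state: the message history plus any partial leaks seen so far."""
    history: list = field(default_factory=list)   # alternating ("user"/"assistant", text) turns
    leaks: list = field(default_factory=list)     # fragments of partially compliant output


def tree_search(goal, generate_adversarial_prompts, query_target_model, compliance_score,
                branching_factor=3, max_turns=5, success_threshold=1.0):
    """Breadth-first expansion: each turn, branch every live conversation into several
    follow-up prompts that reuse previously leaked (partially compliant) content."""
    frontier = [Node()]
    for _ in range(max_turns):
        next_frontier = []
        for node in frontier:
            # Branch: propose follow-up prompts conditioned on the goal, the
            # conversation so far, and any partial compliance already obtained.
            prompts = generate_adversarial_prompts(goal, node.history, node.leaks,
                                                   n=branching_factor)
            for prompt in prompts:
                response = query_target_model(node.history + [("user", prompt)])
                score = compliance_score(goal, response)  # 0 = refusal, 1 = full compliance
                child = Node(
                    history=node.history + [("user", prompt), ("assistant", response)],
                    leaks=node.leaks + ([response] if score > 0 else []),
                )
                if score >= success_threshold:
                    return child          # fully disallowed output reached
                next_frontier.append(child)
        frontier = next_frontier
    return None                           # no jailbreak found within the turn budget
```

The breadth-first choice in this sketch mirrors the behavior the abstract describes: every partially successful branch stays alive, so small concessions in one turn can be reinjected into the next round of prompts.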
Cite
Text
Zhou and Arel. "Siege: Multi-Turn Jailbreaking of Large Language Models with Tree Search." ICLR 2025 Workshops: BuildingTrust, 2025.
Markdown
[Zhou and Arel. "Siege: Multi-Turn Jailbreaking of Large Language Models with Tree Search." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/zhou2025iclrw-siege/)
BibTeX
@inproceedings{zhou2025iclrw-siege,
  title = {{Siege: Multi-Turn Jailbreaking of Large Language Models with Tree Search}},
  author = {Zhou, Andy and Arel, Ron},
  booktitle = {ICLR 2025 Workshops: BuildingTrust},
  year = {2025},
  url = {https://mlanthology.org/iclrw/2025/zhou2025iclrw-siege/}
}