Tamper-Resistant Safeguards for Open-Weight LLMs

Abstract

Rapid advances in the capabilities of large language models (LLMs) have raised widespread concerns regarding their potential for malicious use. Open-weight LLMs present unique challenges, as existing safeguards lack robustness to tampering attacks that modify model weights. For example, recent works have demonstrated that refusal and unlearning safeguards can be trivially removed with a few steps of fine-tuning. These vulnerabilities necessitate new approaches for enabling the safe release of open-weight LLMs. We develop a method, called TAR, for building tamper-resistant safeguards into open-weight LLMs such that adversaries cannot remove the safeguards even after hundreds of steps of fine-tuning. In extensive evaluations and red teaming analyses, we find that our method greatly improves tamper-resistance while preserving benign capabilities. Our results demonstrate that progress on tamper-resistance is possible, opening up a promising new avenue to improve the safety and security of open-weight LLMs.
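The tampering threat model the abstract describes is concrete enough to sketch in code. Below is a minimal PyTorch/Transformers illustration of a fine-tuning attack: an adversary with weight access simply continues training on data the safeguard is meant to block. The model name, attack corpus, and hyperparameters are illustrative placeholders, not the paper's setup; the loop shows the attack that TAR is trained to resist, not the TAR training procedure itself.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

# "org/safeguarded-llm" is a placeholder for any open-weight checkpoint that
# ships with a refusal or unlearning safeguard.
model_name = "org/safeguarded-llm"
tok = AutoTokenizer.from_pretrained(model_name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.train()

opt = AdamW(model.parameters(), lr=2e-5)

# attack_texts stands in for an adversary-chosen corpus (e.g., harmful Q&A
# pairs that the safeguard is supposed to refuse or have unlearned).
attack_texts = ["<adversary-chosen training example>"] * 8

for step in range(100):
    batch = tok(attack_texts, return_tensors="pt", padding=True, truncation=True)
    out = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        labels=batch["input_ids"].clone(),  # standard causal-LM objective
    )
    out.loss.backward()
    opt.step()
    opt.zero_grad()

# Against a naive safeguard, loss on the blocked task collapses within a few
# steps; a tamper-resistant safeguard aims to keep it high even after hundreds
# of steps. An evaluation harness would re-probe the model's behavior here.
```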

Cite

Text

Tamirisa et al. "Tamper-Resistant Safeguards for Open-Weight LLMs." International Conference on Learning Representations, 2025.

Markdown

[Tamirisa et al. "Tamper-Resistant Safeguards for Open-Weight LLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/tamirisa2025iclr-tamperresistant/)

BibTeX

@inproceedings{tamirisa2025iclr-tamperresistant,
  title     = {{Tamper-Resistant Safeguards for Open-Weight LLMs}},
  author    = {Tamirisa, Rishub and Bharathi, Bhrugu and Phan, Long and Zhou, Andy and Gatti, Alice and Suresh, Tarun and Lin, Maxwell and Wang, Justin and Wang, Rowan and Arel, Ron and Zou, Andy and Song, Dawn and Li, Bo and Hendrycks, Dan and Mazeika, Mantas},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/tamirisa2025iclr-tamperresistant/}
}