ASIDE: Architectural Separation of Instructions and Data in Language Models
Abstract
Despite their remarkable performance, large language models lack elementary safety features, which makes them susceptible to numerous malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause for the success of prompt injection attacks. In this work, we propose an architectural change, ASIDE, that enables the model to clearly separate instructions from data by assigning them separate embeddings. Specifically, the data embedding is initialized with a rotation of the pretrained model's embedding, prompting the model to learn to treat instructions and data differently. We demonstrate the effectiveness of our method by showing (1) greatly increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of model representations.
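The abstract mentions initializing the data embedding as a rotation of the pretrained model's embedding. As a minimal sketch of that idea, the snippet below applies an orthogonal (isoclinic) rotation to a toy embedding matrix; the choice of a π/2 angle, the pairing of dimensions, and the toy sizes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def isoclinic_rotation(embeddings, angle=np.pi / 2):
    """Rotate embedding vectors by `angle` in consecutive 2-D planes.

    Pairs up dimensions (0,1), (2,3), ... and applies the same 2-D
    rotation in every pair, yielding an orthogonal transformation.
    """
    rotated = np.asarray(embeddings, dtype=float).copy()
    c, s = np.cos(angle), np.sin(angle)
    x = rotated[..., 0::2].copy()
    y = rotated[..., 1::2].copy()
    rotated[..., 0::2] = c * x - s * y
    rotated[..., 1::2] = s * x + c * y
    return rotated

# Toy vocabulary: 4 tokens with 6-dimensional embeddings.
rng = np.random.default_rng(0)
emb_instructions = rng.normal(size=(4, 6))   # pretrained embedding
emb_data = isoclinic_rotation(emb_instructions)  # init for the data path

# The rotation is orthogonal, so token norms are preserved
# while instruction and data embeddings become distinguishable.
print(np.allclose(np.linalg.norm(emb_instructions, axis=1),
                  np.linalg.norm(emb_data, axis=1)))  # True
```

Because the transformation is orthogonal, it changes no pairwise geometry within the data embedding itself; it only places data tokens in a subspace the model can learn to treat differently from instruction tokens.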
Cite
Text
Zverev et al. "ASIDE: Architectural Separation of Instructions and Data in Language Models." ICLR 2025 Workshops: BuildingTrust, 2025.
Markdown
[Zverev et al. "ASIDE: Architectural Separation of Instructions and Data in Language Models." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/zverev2025iclrw-aside/)
BibTeX
@inproceedings{zverev2025iclrw-aside,
title = {{ASIDE: Architectural Separation of Instructions and Data in Language Models}},
author = {Zverev, Egor and Kortukov, Evgenii and Panfilov, Alexander and Tabesh, Soroush and Lapuschkin, Sebastian and Samek, Wojciech and Lampert, Christoph H.},
booktitle = {ICLR 2025 Workshops: BuildingTrust},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/zverev2025iclrw-aside/}
}