SAEMark: Steering Personalized Multilingual LLM Watermarks with Sparse Autoencoders
Abstract
Watermarking LLM-generated text is critical for content attribution and misinformation prevention, yet existing methods compromise text quality and require white-box model access for logit manipulation or training, which excludes API-based models and limits multilingual use. We propose SAEMark, an **inference-time framework** for *multi-bit* watermarking that embeds personalized information through *feature-based rejection sampling*, fundamentally different from logit-based or rewriting-based approaches: we **do not modify model outputs directly** and require only **black-box access**, while naturally supporting multi-bit message embedding and generalizing across diverse languages and domains. We instantiate the framework with *Sparse Autoencoders* as deterministic feature extractors and provide a theoretical worst-case analysis relating watermark accuracy to computational budget. Experiments across four datasets demonstrate strong watermarking performance on English, Chinese, and code while preserving text quality. SAEMark establishes a new paradigm for **scalable, quality-preserving watermarks** that work seamlessly with closed-source LLMs across languages and domains.
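The core idea described above — feature-based rejection sampling over black-box model outputs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the deterministic feature extractor here is a hash-based stand-in (the paper uses Sparse Autoencoder activations), and all function names (`extract_feature`, `embed_watermark`, `detect_watermark`) and the `budget` parameter are hypothetical.

```python
import hashlib

def extract_feature(text: str, num_bits: int) -> list[int]:
    # Hypothetical stand-in for a deterministic feature extractor.
    # The paper derives features from Sparse Autoencoder activations;
    # here we hash the text so the sketch is self-contained.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [(digest[i] >> 7) & 1 for i in range(num_bits)]

def embed_watermark(generate, prompt: str, message: list[int],
                    budget: int = 16) -> str:
    # Feature-based rejection sampling: draw candidate continuations
    # from the black-box model (`generate`) and keep the one whose
    # deterministic feature signature best matches the target bits.
    # The model's outputs are never modified, only selected among.
    best, best_score = None, -1
    for _ in range(budget):
        candidate = generate(prompt)
        bits = extract_feature(candidate, len(message))
        score = sum(b == m for b, m in zip(bits, message))
        if score > best_score:
            best, best_score = candidate, score
        if best_score == len(message):  # exact match, stop early
            break
    return best

def detect_watermark(text: str, message: list[int]) -> float:
    # Detection: recompute the deterministic features and measure
    # the fraction of bits that agree with the expected message.
    bits = extract_feature(text, len(message))
    return sum(b == m for b, m in zip(bits, message)) / len(message)
```

The `budget` parameter corresponds to the computational budget in the paper's worst-case analysis: more candidates per step raise the probability that some candidate's features match the message bits.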
Cite
Text
Yu et al. "SAEMark: Steering Personalized Multilingual LLM Watermarks with Sparse Autoencoders." Advances in Neural Information Processing Systems, 2025.
Markdown
[Yu et al. "SAEMark: Steering Personalized Multilingual LLM Watermarks with Sparse Autoencoders." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yu2025neurips-saemark/)
BibTeX
@inproceedings{yu2025neurips-saemark,
title = {{SAEMark: Steering Personalized Multilingual LLM Watermarks with Sparse Autoencoders}},
author = {Yu, Zhuohao and Jiang, Xingru and Gu, Weizheng and Wang, Yidong and Wen, Qingsong and Zhang, Shikun and Ye, Wei},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/yu2025neurips-saemark/}
}