No Free Lunch in LLM Watermarking: Trade-Offs in Watermarking Design Choices
Abstract
Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating the misuse of such AI-generated content. However, we show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack---leading to fundamental trade-offs in robustness, utility, and usability. To navigate these trade-offs, we rigorously study a set of simple yet effective attacks on common watermarking systems, and propose guidelines and defenses for LLM watermarking in practice.
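For context on the mechanics, a widely studied family of LLM watermarks biases sampling toward a pseudo-random "green list" of tokens seeded by the preceding context, so a detector can later test whether a suspiciously high fraction of tokens land on their green lists. The sketch below is an illustrative detector in that style, not the specific schemes analyzed in this paper; the hash-based seeding, the gamma fraction, and all function names are assumptions for exposition.

import hashlib
import math
import random

def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set[int]:
    # Derive a deterministic seed from the previous token, then pseudo-randomly
    # select a gamma-fraction "green" subset of the vocabulary.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def detect(tokens: list[int], vocab_size: int, gamma: float = 0.5) -> float:
    # Count how many tokens fall in the green list seeded by their predecessor.
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab_size, gamma)
    )
    n = len(tokens) - 1
    # Under the null hypothesis (unwatermarked text), each token is green with
    # probability gamma; a large positive z-score suggests watermarked text.
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

# Example: score a toy token sequence over a 1000-token vocabulary.
z = detect([17, 42, 7, 913, 5, 264, 88], vocab_size=1000)

Design choices like the seeding context window and the green-list fraction gamma are exactly the kind of knobs whose robustness/utility trade-offs the paper examines.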
Cite
Text
Pang et al. "No Free Lunch in LLM Watermarking: Trade-Offs in Watermarking Design Choices." Neural Information Processing Systems, 2024. doi:10.52202/079017-4402
Markdown
[Pang et al. "No Free Lunch in LLM Watermarking: Trade-Offs in Watermarking Design Choices." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/pang2024neurips-free/) doi:10.52202/079017-4402
BibTeX
@inproceedings{pang2024neurips-free,
title = {{No Free Lunch in LLM Watermarking: Trade-Offs in Watermarking Design Choices}},
author = {Pang, Qi and Hu, Shengyuan and Zheng, Wenting and Smith, Virginia},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-4402},
url = {https://mlanthology.org/neurips/2024/pang2024neurips-free/}
}