Improving Detection of Watermarked Language Models

Abstract

Watermarking has recently emerged as an effective strategy for detecting the generations of large language models (LLMs). The strength of a watermark typically depends strongly on the entropy afforded by the language model and the set of input prompts. However, entropy can be quite limited in practice, especially for models that are post-trained, for example via instruction tuning or reinforcement learning from human feedback (RLHF), which makes detection based on watermarking alone challenging. In this work, we investigate whether detection can be improved by combining watermark detectors with \emph{non-watermark} ones. We explore a number of \emph{hybrid} schemes that combine the two, observing performance gains over either class of detector under a wide range of experimental conditions.
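The abstract does not specify which hybrid schemes the paper uses, but one generic way to combine a watermark detector with a non-watermark one is to treat each as producing a p-value under the "human-written" null and merge them with Fisher's method. The sketch below is an illustration under that assumption; the detector p-values (`p_watermark`, `p_classifier`) are hypothetical placeholders, not outputs of the paper's detectors.

```python
import math

def fisher_combine(p_values):
    """Combine independent p-values via Fisher's method.

    Under the null, X = -2 * sum(ln p_i) is chi-square distributed
    with 2k degrees of freedom, where k = len(p_values). For even
    degrees of freedom the survival function has a closed form:
    sf(x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# Hypothetical per-passage detector outputs (illustrative values only):
p_watermark = 0.04    # p-value from a watermark z-test
p_classifier = 0.08   # p-value from a non-watermark detector
combined = fisher_combine([p_watermark, p_classifier])
```

When both detectors lean the same way, the combined p-value is smaller than either individual one, which is the intuition behind hybrid detection gaining power when watermark entropy alone is limited.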

Cite

Text

Bahri and Wieting. "Improving Detection of Watermarked Language Models." Transactions on Machine Learning Research, 2026.

Markdown

[Bahri and Wieting. "Improving Detection of Watermarked Language Models." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/bahri2026tmlr-improving/)

BibTeX

@article{bahri2026tmlr-improving,
  title     = {{Improving Detection of Watermarked Language Models}},
  author    = {Bahri, Dara and Wieting, John Frederick},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/bahri2026tmlr-improving/}
}