Zero-Shot Detection of LLM-Generated Text via Implicit Reward Model

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across various tasks. However, their ability to generate human-like text has raised concerns about potential misuse, underscoring the need for reliable and effective methods to detect LLM-generated text. In this paper, we propose IRM, a novel zero-shot approach that leverages Implicit Reward Models for LLM-generated text detection. Such implicit reward models can be derived from publicly available instruction-tuned and base models. Previous reward-based methods rely on preference data construction and task-specific fine-tuning; in comparison, IRM requires neither preference collection nor additional training. We evaluate IRM on the DetectRL benchmark and demonstrate that it achieves superior detection performance, outperforming existing zero-shot and supervised methods in LLM-generated text detection.
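To make the idea concrete, below is a minimal sketch of one way an implicit-reward detector could work, assuming the implicit reward is the DPO-style log-likelihood ratio between an instruction-tuned model and its base model. The model pair, the scoring rule, and the idea of flagging high-scoring texts are illustrative assumptions for this sketch, not details confirmed by the paper.

# Minimal sketch of a zero-shot implicit-reward detector.
# Assumption: score(text) = log p_instruct(text) - log p_base(text),
# i.e. the DPO-style implicit reward of the instruction-tuned model
# relative to its base model. Model names are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-3.1-8B"               # assumed base model
INSTRUCT = "meta-llama/Llama-3.1-8B-Instruct"  # assumed instruction-tuned counterpart

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16).eval()
inst = AutoModelForCausalLM.from_pretrained(INSTRUCT, torch_dtype=torch.bfloat16).eval()

@torch.no_grad()
def total_log_prob(model, input_ids):
    """Sum of token log-probabilities of input_ids under model."""
    logits = model(input_ids).logits[:, :-1]   # positions predicting tokens 1..T-1
    targets = input_ids[:, 1:]
    log_probs = torch.log_softmax(logits.float(), dim=-1)
    return log_probs.gather(-1, targets.unsqueeze(-1)).sum().item()

def implicit_reward(text: str) -> float:
    """Implicit reward score: log p_instruct(text) - log p_base(text).
    Under this sketch's assumption, higher scores suggest LLM-generated text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    return total_log_prob(inst, ids) - total_log_prob(base, ids)

# Usage: score candidate texts and flag those above a chosen threshold.
print(implicit_reward("The committee convened to discuss the quarterly budget."))

No preference data or fine-tuning appears in this sketch, which matches the abstract's claim that the reward model is implicit in the publicly released instruct/base pair; the detection threshold would be chosen on held-out data.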

Cite

Text

Liu et al. "Zero-Shot Detection of LLM-Generated Text via Implicit Reward Model." Advances in Neural Information Processing Systems, 2025.

Markdown

[Liu et al. "Zero-Shot Detection of LLM-Generated Text via Implicit Reward Model." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liu2025neurips-zeroshot/)

BibTeX

@inproceedings{liu2025neurips-zeroshot,
  title     = {{Zero-Shot Detection of LLM-Generated Text via Implicit Reward Model}},
  author    = {Liu, Runheng and Huang, Heyan and Xiao, Xingchen and Wu, Zhijing},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/liu2025neurips-zeroshot/}
}