Human Texts Are Outliers: Detecting LLM-Generated Texts via Out-of-Distribution Detection

Abstract

The rapid advancement of large language models (LLMs) such as ChatGPT, DeepSeek, and Claude has significantly increased the presence of AI-generated text in digital communication. This trend has heightened the need for reliable detection methods to distinguish between human-authored and machine-generated content. Existing approaches, both zero-shot methods and supervised classifiers, largely conceptualize this task as a binary classification problem, often leading to poor generalization across domains and models. In this paper, we argue that such a binary formulation fundamentally mischaracterizes the detection task by assuming a coherent representation of human-written texts. In reality, human texts do not constitute a unified distribution, and their diversity cannot be effectively captured through limited sampling. This causes previous classifiers to memorize observed OOD characteristics rather than learn the essence of "non-ID" behavior, limiting generalization to unseen human-authored inputs. Based on this observation, we propose reframing the detection task as an out-of-distribution (OOD) detection problem, treating human-written texts as distributional outliers while machine-generated texts are in-distribution (ID) samples. To this end, we develop a detection framework using one-class learning methods, including DeepSVDD and HRN, and score-based learning techniques such as energy-based scoring, enabling robust and generalizable performance. Extensive experiments across multiple datasets validate the effectiveness of our OOD-based approach. Specifically, the OOD-based method achieves 98.3% AUROC and AUPR with only 8.9% FPR95 on the DeepFake dataset. Moreover, we test our detection framework in multilingual, attacked, and unseen-model and unseen-domain text settings, demonstrating the robustness and generalizability of our framework. Code will be released openly and is also available in the supplementary materials.
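To illustrate the score-based idea mentioned in the abstract, the following is a minimal sketch of an energy-based OOD score computed from a classifier's logits. It is a generic illustration, not the paper's implementation: the function name, temperature parameter, and example logits are all assumptions. Under the paper's framing, machine-generated (ID) text would yield confident, peaked logits and thus low energy, while human-written (OOD) text would yield flatter logits and higher energy.

```python
import math

def energy_score(logits, temperature=1.0):
    """Energy score E(x) = -T * log(sum_k exp(logit_k / T)).

    Lower energy suggests an in-distribution input; inputs whose
    energy exceeds a chosen threshold are flagged as OOD.
    """
    # Subtract the max for numerical stability before exponentiating.
    m = max(l / temperature for l in logits)
    log_sum_exp = m + math.log(sum(math.exp(l / temperature - m) for l in logits))
    return -temperature * log_sum_exp

# Peaked logits (a confident prediction) give lower energy than flat logits.
confident = energy_score([10.0, 0.1, 0.2])   # treated as ID-like
uncertain = energy_score([1.0, 1.1, 0.9])    # treated as OOD-like
assert confident < uncertain
```

In practice, a detection threshold on this score would be calibrated on held-out machine-generated text (e.g., to fix the ID true-positive rate), with inputs above the threshold labeled human-written.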

Cite

Text

Zeng et al. "Human Texts Are Outliers: Detecting LLM-Generated Texts via Out-of-Distribution Detection." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zeng et al. "Human Texts Are Outliers: Detecting LLM-Generated Texts via Out-of-Distribution Detection." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zeng2025neurips-human/)

BibTeX

@inproceedings{zeng2025neurips-human,
  title     = {{Human Texts Are Outliers: Detecting LLM-Generated Texts via Out-of-Distribution Detection}},
  author    = {Zeng, Cong and Tang, Shengkun and Chen, Yuanzhou and Shen, Zhiqiang and Yu, Wenchao and Zhao, Xujiang and Chen, Haifeng and Cheng, Wei and Xu, Zhiqiang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zeng2025neurips-human/}
}