Evaluating LLM-Contaminated Crowdsourcing Data Without Ground Truth

Abstract

The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of large language models (LLMs) by crowd workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimensional training data such as text, making them unsuitable for structured annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction --- a mechanism that evaluates the information within workers' responses --- to mitigate LLM-assisted cheating in crowdsourcing, with a focus on annotation tasks. Our method quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a novel model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.
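The core idea above --- scoring the correlation between workers' answers while conditioning on LLM-generated labels --- can be illustrated with a toy sketch. This is not the paper's actual scoring mechanism; it uses empirical conditional mutual information as one simple stand-in for a conditioned correlation score, and the simulation setup (flip rates, worker names) is entirely hypothetical. Workers who merely copy the LLM label carry no information beyond it, so their conditional score collapses to zero, while independent effortful workers remain correlated through the true label even after conditioning:

```python
import numpy as np
from collections import Counter

def conditional_mutual_info(x, y, z):
    """Empirical conditional mutual information I(X; Y | Z), in nats,
    estimated from three equal-length arrays of discrete labels."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    cmi = 0.0
    for (a, b, c), n_abc in pxyz.items():
        # I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
        # (the 1/n factors cancel, so raw counts can be used in the ratio)
        cmi += (n_abc / n) * np.log(n_abc * pz[c] / (pxz[(a, c)] * pyz[(b, c)]))
    return cmi

# Hypothetical simulation: binary annotation tasks with a latent true label.
rng = np.random.default_rng(0)
m = 20000
truth = rng.integers(0, 2, m)

def noisy(labels, flip=0.2):
    """A worker (or LLM) that reports the true label, flipped w.p. `flip`."""
    return np.where(rng.random(len(labels)) < flip, 1 - labels, labels)

llm = noisy(truth)                                # LLM labels known to the requester
honest_a, honest_b = noisy(truth), noisy(truth)   # independent effortful workers
cheat_a, cheat_b = llm.copy(), llm.copy()         # low-effort workers copying the LLM

cmi_honest = conditional_mutual_info(honest_a, honest_b, llm)
cmi_cheat = conditional_mutual_info(cheat_a, cheat_b, llm)
print(f"honest pair: {cmi_honest:.4f} nats, copying pair: {cmi_cheat:.4f} nats")
```

In this toy setup the copying pair scores exactly zero (their answers are a deterministic function of the conditioned LLM label), while the honest pair retains a strictly positive conditional score, which is the qualitative behavior the conditioning step is meant to exploit.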

Cite

Text

Zhang et al. "Evaluating LLM-Contaminated Crowdsourcing Data Without Ground Truth." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhang et al. "Evaluating LLM-Contaminated Crowdsourcing Data Without Ground Truth." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhang2025neurips-evaluating/)

BibTeX

@inproceedings{zhang2025neurips-evaluating,
  title     = {{Evaluating LLM-Contaminated Crowdsourcing Data Without Ground Truth}},
  author    = {Zhang, Yichi and Pang, Jinlong and Zhu, Zhaowei and Liu, Yang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhang2025neurips-evaluating/}
}