Position: When Incentives Backfire, Data Stops Being Human
Abstract
Progress in AI has relied on human-generated data, from annotator marketplaces to the wider Internet. However, the widespread use of large language models now threatens the quality and integrity of human-generated data on these very platforms. We argue that this issue goes beyond the immediate challenge of filtering AI-generated content – it reveals deeper flaws in how data collection systems are designed. Existing systems often prioritize speed, scale, and efficiency at the cost of intrinsic human motivation, leading to declining engagement and data quality. We propose that rethinking data collection systems to align with contributors’ intrinsic motivations – rather than relying solely on external incentives – can help sustain high-quality data sourcing at scale while maintaining contributor trust and long-term participation.
Cite
Text
Santy et al. "Position: When Incentives Backfire, Data Stops Being Human." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Santy et al. "Position: When Incentives Backfire, Data Stops Being Human." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/santy2025icml-position/)

BibTeX
@inproceedings{santy2025icml-position,
  title     = {{Position: When Incentives Backfire, Data Stops Being Human}},
  author    = {Santy, Sebastin and Bhattacharya, Prasanta and Ribeiro, Manoel Horta and Allen, Kelsey R. and Oh, Sewoong},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {82151--82165},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/santy2025icml-position/}
}