What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices

Abstract

Large Language Models (LLMs) are increasingly deployed as gateways to information, yet their content moderation practices remain underexplored. This work investigates the extent to which LLMs refuse to answer or omit information when prompted on political topics. To do so, we distinguish between hard censorship (i.e., generated refusals, error messages, or canned denial responses) and soft censorship (i.e., selective omission or downplaying of key elements), which we identify in LLMs’ responses when asked to provide information on a broad range of political figures. Our analysis covers 14 state-of-the-art models from Western countries, China, and Russia, prompted in all six official United Nations (UN) languages. Our analysis suggests that although censorship is observed across the board, it is predominantly tailored to an LLM provider’s domestic audience and typically manifests as either hard censorship or soft censorship (though rarely both concurrently). These findings underscore the need for ideological and geographic diversity among publicly available LLMs, and greater transparency in LLM moderation strategies to facilitate informed user choices. All data are made freely available.

Cite

Text

Noels et al. "What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-05962-8_16

Markdown

[Noels et al. "What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/noels2025ecmlpkdd-large/) doi:10.1007/978-3-032-05962-8_16

BibTeX

@inproceedings{noels2025ecmlpkdd-large,
  title     = {{What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices}},
  author    = {Noels, Sander and Bied, Guillaume and Buyl, Maarten and Rogiers, Alexander and Fettach, Yousra and Lijffijt, Jefrey and De Bie, Tijl},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2025},
  pages     = {265--281},
  doi       = {10.1007/978-3-032-05962-8_16},
  url       = {https://mlanthology.org/ecmlpkdd/2025/noels2025ecmlpkdd-large/}
}