BingoGuard: LLM Content Moderation Tools with Risk Levels

Abstract

Malicious content generated by large language models (LLMs) can pose varying degrees of harm. Although existing LLM-based moderators can detect harmful content, they struggle to assess risk levels and may miss lower-risk outputs. Accurate risk assessment allows platforms with different safety thresholds to tailor content filtering and rejection. In this paper, we introduce per-topic severity rubrics for 11 harmful topics and build BingoGuard, an LLM-based moderation system designed to predict both binary safety labels and severity levels. To address the lack of annotations on levels of severity, we propose a scalable generate-then-filter framework that first generates responses across different severity levels and then filters out low-quality responses. Using this framework, we create BingoGuardTrain, a training dataset with 54,897 examples covering a variety of topics, response severities, and styles, and BingoGuardTest, a test set with 988 examples explicitly labeled according to our severity rubrics that enables fine-grained analysis of model behavior across severity levels. Our BingoGuard-8B, trained on BingoGuardTrain, achieves state-of-the-art performance on several moderation benchmarks, including WildGuardTest and HarmBench, as well as on BingoGuardTest, outperforming the best public model, WildGuard, by 4.3%. Our analysis demonstrates that incorporating severity levels into training significantly enhances detection performance and enables the model to effectively gauge the severity of harmful responses. Warning: this paper includes red-teaming examples that may be harmful in nature.
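
As a rough illustration of the generate-then-filter framework described above, the following Python sketch shows one way such a data-curation pipeline could be organized. It is a minimal sketch under stated assumptions, not the paper's implementation: the names generate_response, quality_score, Example, SEVERITY_LEVELS, and QUALITY_THRESHOLD are all hypothetical placeholders, and the four-point ordinal scale and 0.8 cutoff are illustrative values, not figures from the paper.

from dataclasses import dataclass
from typing import List, Tuple

# Assumed ordinal severity scale; the paper defines per-topic rubrics for 11 topics.
SEVERITY_LEVELS = [0, 1, 2, 3]
# Assumed cutoff for keeping a generated response; an illustrative value only.
QUALITY_THRESHOLD = 0.8

@dataclass
class Example:
    topic: str
    prompt: str
    response: str
    severity: int  # target severity level under the topic's rubric

def generate_response(prompt: str, topic: str, severity: int) -> str:
    """Hypothetical generator: elicit a response aimed at a given severity level."""
    raise NotImplementedError

def quality_score(example: Example) -> float:
    """Hypothetical filter: score how faithfully the response matches its target level."""
    raise NotImplementedError

def generate_then_filter(pairs: List[Tuple[str, str]]) -> List[Example]:
    """Generate responses at every severity level, then keep only high-quality ones."""
    dataset = []
    for prompt, topic in pairs:
        for level in SEVERITY_LEVELS:
            ex = Example(topic, prompt, generate_response(prompt, topic, level), level)
            # First generate, then filter: discard responses the quality model
            # judges unfaithful to their target severity level.
            if quality_score(ex) >= QUALITY_THRESHOLD:
                dataset.append(ex)
    return dataset

The resulting examples, each carrying both a topic and a target severity level, would then serve as training data for a moderator that predicts a binary safety label alongside a severity level, which is the setup the abstract attributes to BingoGuard-8B.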

Cite

Text

Yin et al. "BingoGuard: LLM Content Moderation Tools with Risk Levels." International Conference on Learning Representations, 2025.

Markdown

[Yin et al. "BingoGuard: LLM Content Moderation Tools with Risk Levels." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yin2025iclr-bingoguard/)

BibTeX

@inproceedings{yin2025iclr-bingoguard,
  title     = {{BingoGuard: LLM Content Moderation Tools with Risk Levels}},
  author    = {Yin, Fan and Laban, Philippe and Peng, Xiangyu and Zhou, Yilun and Mao, Yixin and Vats, Vaibhav and Ross, Linnea and Agarwal, Divyansh and Xiong, Caiming and Wu, Chien-Sheng},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yin2025iclr-bingoguard/}
}