A Holistic Approach to Undesired Content Detection in the Real World

Abstract

We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.

Cite

Text

Markov et al. "A Holistic Approach to Undesired Content Detection in the Real World." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I12.26752

Markdown

[Markov et al. "A Holistic Approach to Undesired Content Detection in the Real World." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/markov2023aaai-holistic/) doi:10.1609/AAAI.V37I12.26752

BibTeX

@inproceedings{markov2023aaai-holistic,
  title     = {{A Holistic Approach to Undesired Content Detection in the Real World}},
  author    = {Markov, Todor and Zhang, Chong and Agarwal, Sandhini and Nekoul, Florentine Eloundou and Lee, Theodore and Adler, Steven and Jiang, Angela and Weng, Lilian},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {15009--15018},
  doi       = {10.1609/AAAI.V37I12.26752},
  url       = {https://mlanthology.org/aaai/2023/markov2023aaai-holistic/}
}