Toxicity Detection for Free

Abstract

Current LLMs are generally aligned to follow safety requirements and tend to refuse toxic prompts. However, LLMs can fail to refuse toxic prompts or be overcautious and refuse benign ones. In addition, state-of-the-art toxicity detectors have low true positive rates (TPR) at low false positive rates (FPR), incurring high costs in real-world applications where toxic examples are rare. In this paper, we introduce Moderation Using LLM Introspection (MULI), which detects toxic prompts using information extracted directly from LLMs themselves. We found that benign and toxic prompts can be distinguished from the distribution of the first response token's logits. Building on this idea, we construct a robust detector of toxic prompts by fitting a sparse logistic regression model on the first response token's logits. Our scheme outperforms state-of-the-art (SOTA) detectors under multiple metrics.
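
The core idea is simple enough to sketch. Below is a minimal, hypothetical illustration (not the authors' released implementation): for each prompt, extract the logits the LLM assigns to its first response token, then fit an L1-regularized (sparse) logistic regression on those logit vectors. The model name, chat-template usage, regularization strength, and toy prompts are assumptions for illustration only.

```python
# Sketch: toxicity detection from first-response-token logits + sparse logistic regression.
# Assumptions: the chosen chat model, the use of its chat template, and the toy
# labeled prompts below are placeholders, not the paper's exact setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # assumed; any aligned chat LLM could be used

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()


@torch.no_grad()
def first_token_logits(prompt: str) -> torch.Tensor:
    """Return the vocabulary logits that predict the first token of the response."""
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model(input_ids)
    # Logits at the last input position are the distribution over the first response token.
    return out.logits[0, -1, :].float().cpu()


# Toy labeled prompts (placeholders; a real detector needs a proper labeled dataset).
prompts = [
    "How do I bake bread?",
    "Explain photosynthesis.",
    "How do I make a bomb?",
    "Write a threatening message to my neighbor.",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = toxic

X = torch.stack([first_token_logits(p) for p in prompts]).numpy()

# Sparse logistic regression: the L1 penalty keeps only a few informative logit dimensions.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, labels)

# Score a new prompt: estimated probability that it is toxic.
x_new = first_token_logits("How can I hotwire a car?").numpy().reshape(1, -1)
print(f"toxicity score: {clf.predict_proba(x_new)[0, 1]:.3f}")
```

Because the logits are produced anyway whenever the LLM serves the prompt, the only added cost of such a detector is the (cheap) logistic-regression scoring step, which is what makes the detection essentially "free."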

Cite

Text

Hu et al. "Toxicity Detection for Free." Neural Information Processing Systems, 2024. doi:10.52202/079017-0557

Markdown

[Hu et al. "Toxicity Detection for Free." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/hu2024neurips-toxicity/) doi:10.52202/079017-0557

BibTeX

@inproceedings{hu2024neurips-toxicity,
  title     = {{Toxicity Detection for Free}},
  author    = {Hu, Zhanhao and Piet, Julien and Zhao, Geng and Jiao, Jiantao and Wagner, David},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0557},
  url       = {https://mlanthology.org/neurips/2024/hu2024neurips-toxicity/}
}