Leashing the Inner Demons: Self-Detoxification for Language Models

Abstract

Language models (LMs) can reproduce (or amplify) toxic language seen during training, which poses a risk to their practical application. In this paper, we conduct extensive experiments to study this phenomenon. We analyze the impact of prompts, decoding strategies, and training corpora on output toxicity. Based on our findings, we propose a simple yet effective unsupervised method for language models to "detoxify" themselves without an additional large corpus or external discriminator. Compared to a supervised baseline, our proposed method achieves better toxicity reduction while maintaining generation quality across multiple settings. Warning: some examples shown in the paper may contain uncensored offensive content.

Cite

Text

Xu et al. "Leashing the Inner Demons: Self-Detoxification for Language Models." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I10.21406

Markdown

[Xu et al. "Leashing the Inner Demons: Self-Detoxification for Language Models." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/xu2022aaai-leashing/) doi:10.1609/AAAI.V36I10.21406

BibTeX

@inproceedings{xu2022aaai-leashing,
  title     = {{Leashing the Inner Demons: Self-Detoxification for Language Models}},
  author    = {Xu, Canwen and He, Zexue and He, Zhankui and McAuley, Julian J.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {11530--11537},
  doi       = {10.1609/AAAI.V36I10.21406},
  url       = {https://mlanthology.org/aaai/2022/xu2022aaai-leashing/}
}