Dynamic Negative Guidance of Diffusion Models: Towards Immediate Content Removal

Abstract

The rise of highly realistic, large-scale generative diffusion models goes hand in hand with public safety concerns. In addition to the risk of generating *Not-Safe-For-Work* content from models trained on large internet-scraped datasets, there is a serious concern about reproducing copyrighted material, including celebrity images and artistic styles. We introduce ***D**ynamic **N**egative **G**uidance* (DNG), a theoretically grounded negative guidance scheme that avoids the generation of unwanted content without drastically harming the diversity of the model. Our approach avoids some of the disadvantages of the widespread, yet theoretically unfounded, Negative Prompting algorithm. Our guidance scheme does not require retraining the conditional model and can therefore be applied as a temporary solution to meet customer requests until model fine-tuning is possible.
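For context, the Negative Prompting baseline that the abstract contrasts against typically replaces the unconditional branch of classifier-free guidance with a prediction conditioned on the unwanted concept, using a fixed guidance scale at every timestep. The sketch below illustrates only that static baseline; the dynamic guidance scale that defines DNG itself is described in the paper and is not reproduced here. All names (`eps_model`, `negative_prompting_step`, `guidance_scale`) are illustrative placeholders, not the authors' implementation.

```python
def negative_prompting_step(eps_model, x_t, t, cond_pos, cond_neg, guidance_scale=7.5):
    """Noise estimate for one denoising step under static Negative Prompting.

    `eps_model(x_t, t, cond)` is a placeholder for any conditional noise
    predictor; inputs/outputs are assumed to be tensors of matching shape.
    """
    eps_pos = eps_model(x_t, t, cond_pos)  # prediction for the desired prompt
    eps_neg = eps_model(x_t, t, cond_neg)  # prediction for the unwanted concept
    # Classic negative prompting: push the estimate away from the unwanted
    # concept with a guidance scale that stays constant across all timesteps.
    return eps_neg + guidance_scale * (eps_pos - eps_neg)
```

A dynamic scheme such as DNG would, per the abstract, adjust this guidance during sampling rather than keep it fixed; the exact update rule is given in the paper.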

Cite

Text

Koulischer et al. "Dynamic Negative Guidance of Diffusion Models: Towards Immediate Content Removal." NeurIPS 2024 Workshops: SafeGenAi, 2024.

Markdown

[Koulischer et al. "Dynamic Negative Guidance of Diffusion Models: Towards Immediate Content Removal." NeurIPS 2024 Workshops: SafeGenAi, 2024.](https://mlanthology.org/neuripsw/2024/koulischer2024neuripsw-dynamic/)

BibTeX

@inproceedings{koulischer2024neuripsw-dynamic,
  title     = {{Dynamic Negative Guidance of Diffusion Models: Towards Immediate Content Removal}},
  author    = {Koulischer, Felix and Deleu, Johannes and Raya, Gabriel and Demeester, Thomas and Ambrogioni, Luca},
  booktitle = {NeurIPS 2024 Workshops: SafeGenAi},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/koulischer2024neuripsw-dynamic/}
}