Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation

Abstract

Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution is to selectively remove target concepts from the model, but doing so may affect the remaining concepts. Prior approaches have tried to balance this by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving this trade-off remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed adversarial concepts. This approach ensures stable erasure with minimal impact on the other concepts. We demonstrate the effectiveness of our method on the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods in eliminating unwanted content while maintaining the integrity of other unrelated elements. Our code is available at https://github.com/tuananhbui89/Erasing-Adversarial-Preservation.
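
The following is a minimal, self-contained PyTorch sketch of the adversarial-preservation idea described in the abstract, not the authors' released implementation (see the repository above for that). At each step it selects, from a pool of concepts to keep, the one whose prediction has drifted most from a frozen copy of the original model (the "adversarial concept"), then updates the model with an erasing loss on the target concept plus a preservation loss on that adversarial concept. The toy NoisePredictor, the random concept embeddings, and all hyperparameters are hypothetical placeholders.

import torch
import torch.nn as nn

torch.manual_seed(0)

class NoisePredictor(nn.Module):
    """Toy stand-in for a text-conditioned denoiser (e.g., a UNet)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim * 2, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x, concept):
        return self.net(torch.cat([x, concept], dim=-1))

dim = 16
model = NoisePredictor(dim)                  # model being fine-tuned
frozen = NoisePredictor(dim)                 # frozen copy of the original model
frozen.load_state_dict(model.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

target = torch.randn(1, dim)                 # embedding of the concept to erase
neutral = torch.zeros(1, dim)                # "neutral"/null concept embedding
candidates = torch.randn(8, dim)             # pool of concepts we want to preserve
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(4, dim)                  # stand-in for noisy latents

    # 1) Adversarial concept: the candidate whose prediction has drifted
    #    most from the frozen model under the current parameters.
    with torch.no_grad():
        drift = torch.stack([
            (model(x, c.expand(4, -1)) - frozen(x, c.expand(4, -1))).pow(2).mean()
            for c in candidates
        ])
        adv = candidates[drift.argmax()].expand(4, -1)

    # 2) Erase: push the target concept's prediction toward the neutral one;
    #    preserve: keep the adversarial concept's prediction close to frozen.
    erase_loss = (model(x, target.expand(4, -1))
                  - frozen(x, neutral.expand(4, -1))).pow(2).mean()
    preserve_loss = (model(x, adv) - frozen(x, adv)).pow(2).mean()

    loss = erase_loss + 1.0 * preserve_loss  # preservation weight is a placeholder
    opt.zero_grad()
    loss.backward()
    opt.step()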

Cite

Text

Bui et al. "Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation." Neural Information Processing Systems, 2024. doi:10.52202/079017-4230

Markdown

[Bui et al. "Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/bui2024neurips-erasing/) doi:10.52202/079017-4230

BibTeX

@inproceedings{bui2024neurips-erasing,
  title     = {{Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation}},
  author    = {Bui, Anh and Vuong, Long and Doan, Khanh and Le, Trung and Montague, Paul and Abraham, Tamas and Phung, Dinh},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4230},
  url       = {https://mlanthology.org/neurips/2024/bui2024neurips-erasing/}
}