Erasing Concepts from Diffusion Models

Abstract

Motivated by concerns that large-scale diffusion models can produce undesirable output such as sexually explicit content or copyrighted artistic styles, we study the erasure of specific concepts from diffusion model weights. We propose a fine-tuning method that can erase a visual concept from a pre-trained diffusion model, given only the name of the concept and using negative guidance as a teacher. We benchmark our method against previous approaches that remove sexually explicit content and demonstrate its effectiveness, performing on par with Safe Latent Diffusion and censored training. To evaluate artistic style removal, we conduct experiments erasing five modern artists from the network and run a user study to assess human perception of the removed styles. Unlike previous methods, our approach removes concepts from a diffusion model permanently rather than modifying the output at inference time, so it cannot be circumvented even if a user has access to the model weights. Our code, data, and results are available at erasing.baulab.info.
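
To give a concrete sense of the "negative guidance as a teacher" idea, below is a minimal PyTorch sketch of such a fine-tuning objective: a frozen copy of the pre-trained model produces an unconditional and a concept-conditioned noise prediction, and the edited model is trained so that its concept-conditioned prediction matches a target guided *away* from the concept. The `frozen_unet` and `edited_unet` callables, their `(x_t, t, cond)` signature, and the guidance scale `eta` are illustrative assumptions for this sketch, not the paper's exact interface.

```python
import torch
import torch.nn.functional as F

def negative_guidance_target(frozen_unet, x_t, t, c_erase, eta=1.0):
    """Teacher target that steers predictions away from the erased concept.

    `frozen_unet(x_t, t, cond)` is an assumed noise-predictor signature;
    `cond=None` denotes the unconditional (empty-prompt) prediction.
    """
    with torch.no_grad():  # the frozen teacher needs no gradients
        eps_uncond = frozen_unet(x_t, t, cond=None)     # unconditional prediction
        eps_cond = frozen_unet(x_t, t, cond=c_erase)    # concept-conditioned prediction
    # Negative guidance: move opposite the direction that adds the concept.
    return eps_uncond - eta * (eps_cond - eps_uncond)

def erasure_loss(edited_unet, frozen_unet, x_t, t, c_erase, eta=1.0):
    """MSE between the edited model's conditional prediction and the teacher target."""
    target = negative_guidance_target(frozen_unet, x_t, t, c_erase, eta)
    return F.mse_loss(edited_unet(x_t, t, cond=c_erase), target)
```

Because the teacher target is built entirely from the frozen model's own predictions, fine-tuning with this loss edits the weights themselves, which is why the erasure persists even when a user has full access to the model.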

Cite

Text

Gandikota et al. "Erasing Concepts from Diffusion Models." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00230

Markdown

[Gandikota et al. "Erasing Concepts from Diffusion Models." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/gandikota2023iccv-erasing/) doi:10.1109/ICCV51070.2023.00230

BibTeX

@inproceedings{gandikota2023iccv-erasing,
  title     = {{Erasing Concepts from Diffusion Models}},
  author    = {Gandikota, Rohit and Materzynska, Joanna and Fiotto-Kaufman, Jaden and Bau, David},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {2426--2436},
  doi       = {10.1109/ICCV51070.2023.00230},
  url       = {https://mlanthology.org/iccv/2023/gandikota2023iccv-erasing/}
}