When Are Concepts Erased from Diffusion Models?
Abstract
In concept erasure, a model is modified to selectively prevent it from generating a target concept. Despite the rapid development of new methods, it remains unclear how thoroughly these approaches remove the target concept from the model. We begin by proposing two conceptual models for the erasure mechanism in diffusion models: (i) interfering with the model’s internal guidance processes, and (ii) reducing the unconditional likelihood of generating the target concept, potentially removing it entirely. To assess whether a concept has been truly erased from the model, we introduce a comprehensive suite of independent probing techniques: supplying visual context, modifying the diffusion trajectory, applying classifier guidance, and analyzing the model's alternative generations that emerge in place of the erased concept. Our results shed light on the value of exploring concept erasure robustness outside of adversarial text inputs, and emphasize the importance of comprehensive evaluations for erasure in diffusion models.
Cite

Text

Lu et al. "When Are Concepts Erased from Diffusion Models?" Advances in Neural Information Processing Systems, 2025.

Markdown

[Lu et al. "When Are Concepts Erased from Diffusion Models?" Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lu2025neurips-concepts/)

BibTeX
@inproceedings{lu2025neurips-concepts,
  title = {{When Are Concepts Erased from Diffusion Models?}},
  author = {Lu, Kevin and Kriplani, Nicky and Gandikota, Rohit and Pham, Minh and Bau, David and Hegde, Chinmay and Cohen, Niv},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/lu2025neurips-concepts/}
}