Continual Learning for Forgetting in Deep Generative Models
Abstract
The recent proliferation of large-scale text-to-image models has led to growing concerns that such models may be misused to generate harmful, misleading, or inappropriate content. Motivated by this issue, we develop a technique inspired by continual learning to selectively forget concepts in pretrained text-to-image generative models. Our method enables controllable forgetting, where a user can specify how a concept should be forgotten. We apply our method to the open-source Stable Diffusion model, focusing on the problem of deepfakes; experiments show that the model effectively forgets how to depict various celebrities.
Cite
Text
Heng and Soh. "Continual Learning for Forgetting in Deep Generative Models." ICML 2023 Workshops: DeployableGenerativeAI, 2023.
Markdown
[Heng and Soh. "Continual Learning for Forgetting in Deep Generative Models." ICML 2023 Workshops: DeployableGenerativeAI, 2023.](https://mlanthology.org/icmlw/2023/heng2023icmlw-continual/)
BibTeX
@inproceedings{heng2023icmlw-continual,
  title     = {{Continual Learning for Forgetting in Deep Generative Models}},
  author    = {Heng, Alvin and Soh, Harold},
  booktitle = {ICML 2023 Workshops: DeployableGenerativeAI},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/heng2023icmlw-continual/}
}