Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation
Abstract
Diffusion models have demonstrated remarkable performance in image generation, but they also raise security and privacy concerns. To address these concerns, we propose Unlearnable Diffusion Perturbation, a method for generating unlearnable examples for diffusion models that safeguards images from unauthorized exploitation. Our approach designs an algorithm that generates sample-wise perturbation noise for each image to be protected. We frame this as a max-min optimization problem and introduce EUDP, a noise scheduler-based method that enhances the effectiveness of the protective noise. Experiments show that training diffusion models on the protected data significantly reduces the quality of the generated images.
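To make the max-min framing concrete, below is a minimal PyTorch sketch of an alternating max-min loop: the perturbation is updated to maximize a diffusion training loss while a surrogate model keeps minimizing it. This is an illustration under stated assumptions, not the paper's exact algorithm: the standard DDPM noise-prediction loss, the `model(x_t, t)` signature, the function names, and all hyperparameters (L-infinity budget, step counts) are assumptions, and the abstract's noise scheduler-based EUDP enhancement is not reflected here.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Standard DDPM denoising loss: predict the noise added at a random timestep."""
    b = x0.size(0)
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)

def craft_protective_noise(model, optimizer, images, alphas_cumprod,
                           eps=8/255, outer_steps=10, inner_steps=5,
                           pgd_steps=5, alpha=2/255):
    """Alternating max-min loop (hypothetical sketch): the sample-wise
    perturbation delta maximizes the diffusion loss on the protected images
    (outer max) while the surrogate model minimizes it (inner min)."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(outer_steps):
        # Inner min: adapt the surrogate diffusion model to the perturbed data.
        for _ in range(inner_steps):
            optimizer.zero_grad()
            ddpm_loss(model, (images + delta).detach().clamp(0, 1),
                      alphas_cumprod).backward()
            optimizer.step()
        # Outer max: PGD ascent on the perturbation within an L-infinity ball.
        for _ in range(pgd_steps):
            loss = ddpm_loss(model, (images + delta).clamp(0, 1), alphas_cumprod)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
    return delta.detach()
```

The L-infinity budget keeps the perturbation imperceptible, which is the usual constraint in unlearnable-example work; the returned `delta` would be added to the images before release so that any diffusion model trained on them degrades.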
Cite
Text
Zhao et al. "Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation." ICLR 2024 Workshops: R2-FM, 2024.
Markdown
[Zhao et al. "Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/zhao2024iclrw-unlearnable/)
BibTeX
@inproceedings{zhao2024iclrw-unlearnable,
title = {{Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation}},
author = {Zhao, Zhengyue and Duan, Jinhao and Hu, Xing and Xu, Kaidi and Wang, Chenan and Zhang, Rui and Du, Zidong and Guo, Qi and Chen, Yunji},
booktitle = {ICLR 2024 Workshops: R2-FM},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/zhao2024iclrw-unlearnable/}
}