VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models
Abstract
Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output manipulation attack triggered by a maliciously embedded pattern at model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs.
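As a rough illustration of the mechanism the abstract refers to (not taken from the paper), the sketch below shows the standard DDPM forward corruption step q(x_t | x_0) together with a hypothetical trigger patch stamped onto the clean input before noising. The names `forward_noise`, `stamp_trigger`, `alpha_bar_t`, and `trigger` are assumptions made for this example; they are not VillanDiffusion's actual poisoning scheme or API.

```python
# Minimal sketch, assuming a DDPM-style forward process and a toy corner-patch trigger.
import numpy as np

def forward_noise(x0, alpha_bar_t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

def stamp_trigger(x0, trigger, size=4):
    """Paste a small trigger patch into the image corner (hypothetical backdoor input)."""
    x = x0.copy()
    x[..., :size, :size] = trigger
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 32, 32))   # clean image, channels-first (C, H, W)
trigger = np.ones((3, 4, 4))            # toy trigger pattern
x0_poisoned = stamp_trigger(x0, trigger)
x_t = forward_noise(x0_poisoned, alpha_bar_t=0.5, rng=rng)  # noisy, triggered input
```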
Cite
Text
Chou et al. "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models." NeurIPS 2023 Workshops: BUGS, 2023.
Markdown
[Chou et al. "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models." NeurIPS 2023 Workshops: BUGS, 2023.](https://mlanthology.org/neuripsw/2023/chou2023neuripsw-villandiffusion/)
BibTeX
@inproceedings{chou2023neuripsw-villandiffusion,
title = {{VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models}},
author = {Chou, Sheng-Yen and Chen, Pin-Yu and Ho, Tsung-Yi},
booktitle = {NeurIPS 2023 Workshops: BUGS},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/chou2023neuripsw-villandiffusion/}
}