Membership Inference Attack on Diffusion Models via Quantile Regression
Abstract
Diffusion models have recently demonstrated great potential for image synthesis due to their ability to generate high-quality synthetic data. However, training these models on sensitive data raises privacy concerns. In this paper, we evaluate the privacy risks of diffusion models through a \emph{membership inference (MI) attack}, which aims to identify whether a target example belongs to the training set given access to the trained diffusion model. Our proposed MI attack learns a single quantile regression model that predicts (a quantile of) the distribution of reconstruction loss for each example. This enables us to set a threshold on the reconstruction loss that is tailored to each example when determining its membership status. We show that our attack outperforms the prior state-of-the-art MI attack while avoiding the high computational cost of training multiple shadow models. Consequently, our work enriches the set of practical tools for auditing the privacy risks of large-scale generative models.
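To make the idea concrete, below is a minimal sketch of a quantile-regression membership test in the spirit of the abstract. It is not the paper's implementation: the features, data, and the choice of scikit-learn's `GradientBoostingRegressor` with quantile loss are all illustrative assumptions, and the paper's regressor operates on actual diffusion-model reconstruction losses.

```python
# Hedged sketch: per-example membership thresholds via quantile regression.
# All data and feature choices here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-ins for held-out (non-member) examples: feature vectors and their
# reconstruction losses under the target diffusion model.
X_public = rng.normal(size=(1000, 16))
loss_public = np.abs(X_public[:, 0]) + rng.exponential(0.1, size=1000)

# Fit a single quantile regressor predicting, e.g., the 5th percentile of
# the non-member reconstruction-loss distribution for each example.
alpha = 0.05
qr = GradientBoostingRegressor(loss="quantile", alpha=alpha)
qr.fit(X_public, loss_public)

def is_member(x_feat: np.ndarray, observed_loss: float) -> bool:
    """Flag as a member if the observed reconstruction loss falls below
    the example's own predicted quantile (members tend to be reconstructed
    with unusually low loss)."""
    threshold = qr.predict(x_feat.reshape(1, -1))[0]  # per-example threshold
    return observed_loss < threshold

x_target = rng.normal(size=16)
print(is_member(x_target, observed_loss=0.01))
```

Because the threshold is predicted per example rather than fixed globally, examples that are intrinsically easy or hard to reconstruct are judged against their own baseline, which is the key advantage the abstract claims over a single global cutoff.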
Cite
Text
Wu et al. "Membership Inference Attack on Diffusion Models via Quantile Regression." NeurIPS 2023 Workshops: RegML, 2023.
Markdown
[Wu et al. "Membership Inference Attack on Diffusion Models via Quantile Regression." NeurIPS 2023 Workshops: RegML, 2023.](https://mlanthology.org/neuripsw/2023/wu2023neuripsw-membership/)
BibTeX
@inproceedings{wu2023neuripsw-membership,
title = {{Membership Inference Attack on Diffusion Models via Quantile Regression}},
author = {Wu, Steven and Tang, Shuai and Aydore, Sergul and Kearns, Michael and Roth, Aaron},
booktitle = {NeurIPS 2023 Workshops: RegML},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/wu2023neuripsw-membership/}
}