Differentially Private Generation of High Fidelity Samples from Diffusion Models
Abstract
Diffusion-based generative models achieve unprecedented image quality but are known to leak private information about the training data. Our goal is to provide provable guarantees on privacy leakage of training data while simultaneously enabling generation of high-fidelity samples. Our proposed approach first non-privately trains an ensemble of diffusion models and then aggregates their predictions to provide privacy guarantees for generated samples. We demonstrate the success of our approach on MNIST and CIFAR-10.
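The abstract describes aggregating the predictions of a non-privately trained ensemble to obtain privacy guarantees for generated samples. A minimal sketch of one such aggregation rule is below, assuming a Gaussian-mechanism-style scheme (average the ensemble members' denoising predictions, then add calibrated Gaussian noise); the function name, interface, and noise scale are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dp_aggregate(predictions, sigma=0.5, rng=None):
    """Privately aggregate per-model predictions (hypothetical interface).

    predictions: array of shape (num_models, ...), one denoising
    prediction per ensemble member at the current diffusion step.
    sigma: standard deviation of the Gaussian noise; in a real DP
    analysis this would be calibrated to the aggregation's sensitivity
    and the target (epsilon, delta) budget.
    """
    rng = np.random.default_rng(rng)
    mean = predictions.mean(axis=0)                  # average the ensemble
    noise = rng.normal(0.0, sigma, size=mean.shape)  # calibrated Gaussian noise
    return mean + noise

# Example: aggregate 5 ensemble predictions for a 32x32x3 image.
preds = np.random.default_rng(0).normal(size=(5, 32, 32, 3))
private_pred = dp_aggregate(preds, sigma=0.5, rng=0)
```

The aggregated output would then drive the next denoising step, so only the noised ensemble consensus, rather than any single model's memorized detail, influences the generated sample.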
Cite
Text
Sehwag et al. "Differentially Private Generation of High Fidelity Samples from Diffusion Models." ICML 2023 Workshops: DeployableGenerativeAI, 2023.
Markdown
[Sehwag et al. "Differentially Private Generation of High Fidelity Samples from Diffusion Models." ICML 2023 Workshops: DeployableGenerativeAI, 2023.](https://mlanthology.org/icmlw/2023/sehwag2023icmlw-differentially/)
BibTeX
@inproceedings{sehwag2023icmlw-differentially,
title = {{Differentially Private Generation of High Fidelity Samples from Diffusion Models}},
author = {Sehwag, Vikash and Panda, Ashwinee and Pokle, Ashwini and Tang, Xinyu and Mahloujifar, Saeed and Chiang, Mung and Kolter, J Zico and Mittal, Prateek},
booktitle = {ICML 2023 Workshops: DeployableGenerativeAI},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/sehwag2023icmlw-differentially/}
}