Efficient Differentially Private Fine-Tuning of Diffusion Models

Abstract

Recent developments in Diffusion Models (DMs) enable the generation of astonishingly high-quality synthetic samples. Recent work showed that a diffusion model pre-trained on public data and fully fine-tuned with differential privacy on private data can generate synthetic samples that train a downstream classifier while achieving a good privacy-utility tradeoff. However, fully fine-tuning such large diffusion models with DP-SGD can be very resource-demanding in terms of memory usage and computation. In this work, we investigate Parameter-Efficient Fine-Tuning (PEFT) of diffusion models using Low-Dimensional Adaptation (LoDA) with Differential Privacy. We evaluate the proposed method on the MNIST and CIFAR-10 datasets and demonstrate that such efficient fine-tuning can also generate useful synthetic samples for training downstream classifiers, with guaranteed privacy protection of the fine-tuning data. Our source code will be made available on GitHub.
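
To make the recipe described in the abstract concrete, below is a minimal sketch of differentially private parameter-efficient fine-tuning: a pre-trained network is frozen, a small trainable adapter is attached, and only the adapter parameters are updated with DP-SGD. This is not the authors' implementation. The LoRA-style linear adapter is only a stand-in for LoDA, the tiny MLP stands in for a diffusion model's denoiser, and the Opacus-based DP-SGD setup, toy data, and hyperparameters are illustrative assumptions.

# Minimal sketch (assumptions noted above): freeze a pre-trained model,
# attach a low-rank adapter, and fine-tune only the adapter with DP-SGD.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine


class LowRankAdapterLinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze pre-trained weights
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)        # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))


# Toy stand-in for a pre-trained denoiser; a real DM would use a UNet.
base_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784))
model = nn.Sequential(
    LowRankAdapterLinear(base_model[0]), nn.ReLU(),
    LowRankAdapterLinear(base_model[2]),
)

# Toy "private" data: flattened 28x28 images (illustrative only).
x = torch.randn(512, 784)
loader = DataLoader(TensorDataset(x), batch_size=64)

# Optimize only the trainable (adapter) parameters.
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

# Wrap model/optimizer/loader so gradients are per-sample clipped and noised.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    target_epsilon=10.0,   # illustrative (epsilon, delta) budget
    target_delta=1e-5,
    epochs=1,
    max_grad_norm=1.0,
)

for (batch,) in loader:
    if batch.shape[0] == 0:                   # Poisson sampling can yield empty batches
        continue
    optimizer.zero_grad()
    t = torch.rand(batch.shape[0], 1)         # random diffusion "timestep"
    noise = torch.randn_like(batch)
    noisy = (1 - t) * batch + t * noise       # simplified forward noising process
    loss = nn.functional.mse_loss(model(noisy), noise)  # predict the noise
    loss.backward()
    optimizer.step()

print(f"spent privacy budget: eps={privacy_engine.get_epsilon(1e-5):.2f}")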

Cite

Text

Liu et al. "Efficient Differentially Private Fine-Tuning of Diffusion Models." ICML 2024 Workshops: NextGenAISafety, 2024.

Markdown

[Liu et al. "Efficient Differentially Private Fine-Tuning of Diffusion Models." ICML 2024 Workshops: NextGenAISafety, 2024.](https://mlanthology.org/icmlw/2024/liu2024icmlw-efficient/)

BibTeX

@inproceedings{liu2024icmlw-efficient,
  title     = {{Efficient Differentially Private Fine-Tuning of Diffusion Models}},
  author    = {Liu, Jing and Lowy, Andrew and Koike-Akino, Toshiaki and Parsons, Kieran and Wang, Ye},
  booktitle = {ICML 2024 Workshops: NextGenAISafety},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/liu2024icmlw-efficient/}
}