Differentially Private Image Classification by Learning Priors from Random Processes
Abstract
In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient clipping and noise addition. A recent focus in private learning research is improving the performance of DP-SGD on private data by incorporating priors that are learned on real-world public data. In this work, we explore how we can improve the privacy-utility tradeoff of DP-SGD by learning priors from images generated by random processes and transferring these priors to private data. We propose DP-RandP, a three-phase approach. We attain new state-of-the-art accuracy when training from scratch on CIFAR10, CIFAR100, MedMNIST and ImageNet for a range of privacy budgets $\varepsilon \in [1, 8]$. In particular, we improve the previous best reported accuracy on CIFAR10 from $60.6\%$ to $72.3\%$ for $\varepsilon = 1$.
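The per-sample clipping and noise addition the abstract refers to can be summarized in a short sketch. Below is a minimal, illustrative DP-SGD update on precomputed per-sample gradients in NumPy; names such as `dp_sgd_update`, `clip_norm`, and `noise_multiplier` are hypothetical and are not taken from the paper or its code.

```python
import numpy as np

def dp_sgd_update(params, per_sample_grads, clip_norm, noise_multiplier, lr, rng):
    """Illustrative DP-SGD step: clip each example's gradient to clip_norm,
    sum the clipped gradients, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, average over the batch, then take a step."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # per-sample clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)  # noisy average gradient
    return params - lr * noisy_mean

# Toy usage: 4 examples, 3 parameters.
rng = np.random.default_rng(0)
params = np.zeros(3)
per_sample_grads = rng.normal(size=(4, 3))
params = dp_sgd_update(params, per_sample_grads, clip_norm=1.0,
                       noise_multiplier=1.0, lr=0.1, rng=rng)
```

Both the clipping bias and the added noise are what degrade utility relative to plain SGD; the priors DP-RandP learns from random-process images are aimed at recovering part of that lost accuracy.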
Cite
Text
Tang et al. "Differentially Private Image Classification by Learning Priors from Random Processes." Neural Information Processing Systems, 2023.
Markdown
[Tang et al. "Differentially Private Image Classification by Learning Priors from Random Processes." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/tang2023neurips-differentially/)
BibTeX
@inproceedings{tang2023neurips-differentially,
title = {{Differentially Private Image Classification by Learning Priors from Random Processes}},
author = {Tang, Xinyu and Panda, Ashwinee and Sehwag, Vikash and Mittal, Prateek},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/tang2023neurips-differentially/}
}