Practical and Private (Deep) Learning Without Sampling or Shuffling
Abstract
We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires \emph{privacy amplification by sampling or shuffling} to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification.
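At its core, DP-FTRL replaces per-batch sampling noise with the binary-tree aggregation mechanism, which releases noisy prefix sums of gradients so that each gradient contributes to only O(log t) noise terms. The sketch below illustrates that mechanism in a minimal form; the class and parameter names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

class TreeAggregator:
    """Noisy prefix sums via the binary-tree mechanism underlying DP-FTRL.

    Illustrative sketch only: maintains a stack of active tree nodes that
    mirrors the binary representation of the step count t, so each prefix
    sum combines only O(log t) cached noise terms.
    """

    def __init__(self, dim, sigma, seed=0):
        self.dim = dim
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)
        # Each entry is (size, exact_sum, cached_noise) for one tree node.
        self.nodes = []

    def append(self, x):
        """Ingest the value for the next step; return a noisy prefix sum."""
        x = np.asarray(x, dtype=float)
        self.nodes.append((1, x, self.rng.normal(0.0, self.sigma, self.dim)))
        # Merge equal-sized nodes (binary-counter carries). A merged node
        # draws fresh noise that is then reused by all later prefixes,
        # which is what keeps the total noise logarithmic in t.
        while len(self.nodes) >= 2 and self.nodes[-1][0] == self.nodes[-2][0]:
            s2, v2, _ = self.nodes.pop()
            s1, v1, _ = self.nodes.pop()
            self.nodes.append((s1 + s2, v1 + v2,
                               self.rng.normal(0.0, self.sigma, self.dim)))
        return sum(v + n for _, v, n in self.nodes)
```

With `sigma=0.0` the output reduces to the exact prefix sum, which is a convenient sanity check; a DP-FTRL step would then update the model from the noisy cumulative gradient rather than from individual noisy mini-batches.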
Cite

Text
Kairouz et al. "Practical and Private (Deep) Learning Without Sampling or Shuffling." International Conference on Machine Learning, 2021.

Markdown
[Kairouz et al. "Practical and Private (Deep) Learning Without Sampling or Shuffling." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/kairouz2021icml-practical/)

BibTeX
@inproceedings{kairouz2021icml-practical,
title = {{Practical and Private (Deep) Learning Without Sampling or Shuffling}},
author = {Kairouz, Peter and McMahan, Brendan and Song, Shuang and Thakkar, Om and Thakurta, Abhradeep and Xu, Zheng},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {5213--5225},
volume = {139},
url = {https://mlanthology.org/icml/2021/kairouz2021icml-practical/}
}