Differentially Private Learning with Adaptive Clipping
Abstract
Existing approaches for training neural networks with user-level differential privacy (e.g., DP Federated Averaging) in federated learning (FL) settings involve bounding the contribution of each user's model update by *clipping* it to some constant value. However, there is no good *a priori* setting of the clipping norm across tasks and learning settings: the update norm distribution depends on the model architecture and loss, the amount of data on each device, the client learning rate, and possibly various other parameters. We propose a method wherein, instead of a fixed clipping norm, one clips to a value at a specified quantile of the update norm distribution, where the value at the quantile is itself estimated online, with differential privacy. The method tracks the quantile closely, uses a negligible amount of privacy budget, is compatible with other federated learning technologies such as compression and secure aggregation, and has a straightforward joint DP analysis with DP-FedAvg. Experiments demonstrate that adaptive clipping to the median update norm works well across a range of federated learning tasks, eliminating the need to tune any clipping hyperparameter.
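As a concrete illustration, here is a minimal sketch of the geometric quantile-tracking update the abstract describes: each round, clients report an indicator of whether their update norm fit under the current clip norm, the (noised) fraction of such clients is compared to the target quantile, and the clip norm is adjusted multiplicatively. The function name `adaptive_clip_round` and the `noise_std` parameter are hypothetical stand-ins; the paper's exact noise mechanism and privacy accounting are not reproduced here.

```python
import numpy as np

def adaptive_clip_round(update_norms, clip_norm, target_quantile=0.5,
                        learning_rate=0.2, noise_std=0.0):
    """One round of quantile-based adaptive clipping (sketch).

    `noise_std` is a placeholder for the DP noise added to the
    aggregated indicator sum; the real mechanism and its privacy
    analysis follow the paper, not this simplified illustration.
    """
    # Indicator b_i = 1 if the i-th client's update would not be clipped.
    indicators = (np.asarray(update_norms) <= clip_norm).astype(float)
    # Noisy fraction of unclipped updates (noise added to the sum).
    n = len(indicators)
    noisy_fraction = (indicators.sum() + np.random.normal(0.0, noise_std)) / n
    # Geometric update: shrink the clip norm when more than the target
    # fraction of updates fit under it, grow it when fewer do.
    return clip_norm * np.exp(-learning_rate * (noisy_fraction - target_quantile))

# Example: track the median (0.5 quantile) of simulated update norms.
rng = np.random.default_rng(0)
C = 0.1
for _ in range(200):
    norms = rng.lognormal(mean=0.0, sigma=0.5, size=100)
    C = adaptive_clip_round(norms, C, target_quantile=0.5, noise_std=1.0)
print(f"estimated median update norm: {C:.3f}")  # approaches ~1.0
```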
Cite
Text
Andrew et al. "Differentially Private Learning with Adaptive Clipping." Neural Information Processing Systems, 2021.
Markdown
[Andrew et al. "Differentially Private Learning with Adaptive Clipping." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/andrew2021neurips-differentially/)
BibTeX
@inproceedings{andrew2021neurips-differentially,
title = {{Differentially Private Learning with Adaptive Clipping}},
author = {Andrew, Galen and Thakkar, Om and McMahan, Brendan and Ramaswamy, Swaroop},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/andrew2021neurips-differentially/}
}