Adaptive Hyperparameter Selection for Differentially Private Gradient Descent
Abstract
We present an adaptive mechanism for hyperparameter selection in differentially private optimization that addresses the inherent trade-off between utility and privacy. The mechanism eliminates the often unstructured and time-consuming manual effort of selecting hyperparameters and avoids the additional privacy cost that hyperparameter selection otherwise incurs on top of that of the actual algorithm. We instantiate our mechanism for noisy gradient descent on non-convex, convex, and strongly convex loss functions, respectively, to derive schedules for the noise variance and step size. These schedules account for the properties of the loss function and adapt to convergence metrics such as the gradient norm. Under these schedules, we show that noisy gradient descent converges at essentially the same rate as its noise-free counterpart. Numerical experiments show that the schedules consistently perform well across a range of datasets without manual tuning.
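The abstract does not spell out the derived schedules themselves, so the sketch below is only a minimal, generic illustration of the setting it describes: noisy gradient descent in which the per-iteration step size and Gaussian noise scale are supplied by schedules that may depend on the iteration counter and the current gradient norm. The function noisy_gradient_descent and the placeholder schedules in the usage example are hypothetical and are not the paper's mechanism.

# Illustrative sketch only: generic noisy (Gaussian-perturbed) gradient descent
# whose step size and noise scale are set per iteration by adaptive schedules.
# The schedules in the example are hypothetical placeholders, not the paper's.
import numpy as np

def noisy_gradient_descent(grad_fn, w0, num_iters,
                           step_schedule, noise_schedule, rng=None):
    """grad_fn(w) returns the (bounded) gradient at w.
    step_schedule(t, g_norm) and noise_schedule(t, g_norm) return the
    step size and Gaussian noise standard deviation for iteration t."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float).copy()
    for t in range(num_iters):
        g = grad_fn(w)
        g_norm = np.linalg.norm(g)
        eta = step_schedule(t, g_norm)       # adaptive step size
        sigma = noise_schedule(t, g_norm)    # adaptive noise scale
        noise = rng.normal(0.0, sigma, size=w.shape)
        w = w - eta * (g + noise)            # noisy gradient step
    return w

# Example usage with assumed placeholder schedules on a quadratic loss:
if __name__ == "__main__":
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    grad = lambda w: A @ w - b                       # gradient of 0.5 w^T A w - b^T w
    step = lambda t, gn: 0.1 / (1.0 + 0.01 * t)      # decaying step size (placeholder)
    noise = lambda t, gn: 0.05 * max(gn, 1e-3)       # gradient-norm-dependent noise (placeholder)
    w_hat = noisy_gradient_descent(grad, np.zeros(2), 200, step, noise)
    print(w_hat)

In an actual differentially private instantiation, the gradient would be norm-bounded (e.g. clipped) and the noise scale tied to the privacy budget; the placeholders above only illustrate the interface of schedules that react to convergence metrics such as the gradient norm.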
Cite
Text
Fay et al. "Adaptive Hyperparameter Selection for Differentially Private Gradient Descent." Transactions on Machine Learning Research, 2023.
Markdown
[Fay et al. "Adaptive Hyperparameter Selection for Differentially Private Gradient Descent." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/fay2023tmlr-adaptive/)
BibTeX
@article{fay2023tmlr-adaptive,
title = {{Adaptive Hyperparameter Selection for Differentially Private Gradient Descent}},
author = {Fay, Dominik and Magnússon, Sindri and Sjölund, Jens and Johansson, Mikael},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/fay2023tmlr-adaptive/}
}