Differentially Private Online-to-Batch for Smooth Losses

Abstract

We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate $\tilde O(1/\sqrt{T} + 1/(\epsilon T))$ on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
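The reduction builds on the classical online-to-batch conversion, in which an online learner is fed stochastic gradients and the average of its iterates is returned. The sketch below illustrates only that non-private skeleton, using projected online gradient descent as the online learner and adding Gaussian noise to each gradient as a stand-in for a privatization step. The function names, noise mechanism, and step sizes are illustrative assumptions and not the paper's algorithm or analysis.

```python
import numpy as np

def ogd_step(w, grad, eta, radius=1.0):
    """One step of online gradient descent, projected onto an L2 ball."""
    w = w - eta * grad
    norm = np.linalg.norm(w)
    if norm > radius:
        w = w * (radius / norm)
    return w

def noisy_online_to_batch(grad_oracle, dim, T, eta=0.1, noise_std=0.0, rng=None):
    """
    Classical online-to-batch conversion: run an online learner (here OGD) on
    stochastic gradients and return the running average of its iterates.
    noise_std > 0 adds Gaussian noise to each gradient purely as a placeholder
    for a privatization step; this is a sketch, not the paper's mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(dim)
    avg = np.zeros(dim)
    for t in range(1, T + 1):
        g = grad_oracle(w)                        # stochastic gradient at w
        g = g + noise_std * rng.standard_normal(dim)
        w = ogd_step(w, g, eta / np.sqrt(t))      # online learner update
        avg += (w - avg) / t                      # running average of iterates
    return avg

# Example: minimize E[0.5 * ||w - z||^2] with z ~ N(mu, 0.01 I),
# whose stochastic gradient at w is w - z. (Hypothetical toy problem.)
rng = np.random.default_rng(0)
mu = np.array([0.5, -0.3])
oracle = lambda w: w - (mu + 0.1 * rng.standard_normal(2))
w_hat = noisy_online_to_batch(oracle, dim=2, T=5000, noise_std=0.05, rng=rng)
print(w_hat)  # should land near mu
```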

Cite

Text

Zhang et al. "Differentially Private Online-to-Batch for Smooth Losses." Neural Information Processing Systems, 2022.

Markdown

[Zhang et al. "Differentially Private Online-to-Batch for Smooth Losses." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhang2022neurips-differentially/)

BibTeX

@inproceedings{zhang2022neurips-differentially,
  title     = {{Differentially Private Online-to-Batch for Smooth Losses}},
  author    = {Zhang, Qinzi and Tran, Hoang and Cutkosky, Ashok},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhang2022neurips-differentially/}
}