Improved Differentially Private Riemannian Optimization: Fast Sampling and Variance Reduction

Abstract

A common step in differentially private (DP) Riemannian optimization is sampling from the (tangent) Gaussian distribution, since noise must be generated in the tangent space to perturb the gradient. Existing works use either Markov chain Monte Carlo (MCMC) sampling or sampling based on explicit basis construction on the tangent space. This becomes a computational bottleneck in the practical use of DP Riemannian optimization, especially when performing stochastic optimization. In this paper, we discuss different sampling strategies and develop efficient sampling procedures by exploiting linear isometry between tangent spaces, and we show them to be orders of magnitude faster than both MCMC sampling and sampling via explicit basis construction. Furthermore, we develop the DP Riemannian stochastic variance reduced gradient algorithm and compare it with the DP Riemannian gradient descent and stochastic gradient descent algorithms on various problems.
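To make the tangent-space sampling step concrete, below is a minimal illustrative sketch (not the paper's exact construction) for the unit sphere S^{n-1}, where the tangent space at a point x is the hyperplane orthogonal to x. An isotropic tangent Gaussian can be drawn without constructing an explicit orthonormal basis: sample an ambient Gaussian and orthogonally project it onto the tangent space, since the projection of an isotropic Gaussian onto a subspace is again isotropic on that subspace. The function name `sample_tangent_gaussian` is hypothetical, chosen for illustration only.

```python
import numpy as np

def sample_tangent_gaussian(x, sigma, rng=None):
    """Sample v ~ N(0, sigma^2 I) on the tangent space T_x S^{n-1}.

    Illustrative sketch only: avoids explicit basis construction by
    projecting an ambient Gaussian onto the tangent space at x.
    """
    rng = rng or np.random.default_rng()
    z = sigma * rng.standard_normal(x.shape)   # ambient Gaussian in R^n
    return z - np.dot(x, z) * x                # project: (I - x x^T) z

# Usage: perturb a Riemannian gradient at x with tangent-space noise.
x = np.array([1.0, 0.0, 0.0])                  # a point on S^2
noise = sample_tangent_gaussian(x, sigma=0.1)
assert abs(np.dot(x, noise)) < 1e-12           # noise lies in T_x S^2
```

Because tangent spaces of a Riemannian manifold are linearly isometric to one another (e.g., via parallel transport), a sample drawn in one tangent space can be mapped to another without re-running MCMC or rebuilding a basis at every iterate, which is the kind of structure the paper exploits for its speedups.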

Cite

Text

Utpala et al. "Improved Differentially Private Riemannian Optimization: Fast Sampling and Variance Reduction." Transactions on Machine Learning Research, 2023.

Markdown

[Utpala et al. "Improved Differentially Private Riemannian Optimization: Fast Sampling and Variance Reduction." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/utpala2023tmlr-improved/)

BibTeX

@article{utpala2023tmlr-improved,
  title     = {{Improved Differentially Private Riemannian Optimization: Fast Sampling and Variance Reduction}},
  author    = {Utpala, Saiteja and Han, Andi and Jawanpuria, Pratik and Mishra, Bamdev},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/utpala2023tmlr-improved/}
}