On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent

Abstract

Differentially private zeroth-order optimization methods have recently gained popularity in the private fine-tuning of machine learning models due to their favorable empirical performance and reduced memory requirements. Current approaches for privatizing zeroth-order methods rely on adding Gaussian noise to the estimated zeroth-order gradients. However, because the search direction in these methods is inherently random, researchers including Tang et al. and Zhang et al. have raised a fundamental question: is the inherent noise in zeroth-order estimators sufficient to ensure the overall differential privacy of the algorithm? This work settles the question for a class of oracle-based optimization algorithms where the oracle returns zeroth-order gradient estimates. In particular, we show that for a fixed initialization, there exist strongly convex objective functions such that running Projected Zeroth-Order Gradient Descent (ZO-GD) is not differentially private. Moreover, we show that even with random initialization, the privacy loss of ZO-GD increases superlinearly with the number of iterations when minimizing convex objective functions.
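For concreteness, the two-point zeroth-order estimator and projected update studied in this line of work typically take the following standard (Nesterov–Spokoiny-style) form; the Gaussian smoothing distribution, step size $\eta$, smoothing radius $\mu$, and constraint set $\mathcal{C}$ below are illustrative assumptions rather than the paper's exact setup.

\[
\hat{g}_t \;=\; \frac{f(x_t + \mu u_t) - f(x_t - \mu u_t)}{2\mu}\, u_t,
\qquad u_t \sim \mathcal{N}(0, I_d),
\]
\[
x_{t+1} \;=\; \Pi_{\mathcal{C}}\!\left(x_t - \eta\, \hat{g}_t\right),
\]

where $\Pi_{\mathcal{C}}$ denotes Euclidean projection onto the convex set $\mathcal{C}$. Note that the only randomness enters through the direction $u_t$; the question addressed here is whether that randomness alone, without additional Gaussian noise injected into $\hat{g}_t$, suffices for differential privacy.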

Cite

Text

Gupta et al. "On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[Gupta et al. "On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/gupta2024neuripsw-inherent/)

BibTeX

@inproceedings{gupta2024neuripsw-inherent,
  title     = {{On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent}},
  author    = {Gupta, Devansh and Razaviyayn, Meisam and Sharan, Vatsal},
  booktitle = {NeurIPS 2024 Workshops: OPT},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/gupta2024neuripsw-inherent/}
}