Private Zeroth-Order Nonsmooth Nonconvex Optimization
Abstract
We introduce a new zeroth-order algorithm for private stochastic optimization on nonconvex and nonsmooth objectives. Given a dataset of size $M$, our algorithm ensures $(\alpha,\alpha\rho^2/2)$-Rényi differential privacy and finds a $(\delta,\epsilon)$-stationary point so long as $M=\tilde\Omega\left(\frac{d}{\delta\epsilon^3} + \frac{d^{3/2}}{\rho\delta\epsilon^2}\right)$. This matches the optimal complexity found in its non-private zeroth-order analog. Notably, although the objective is not smooth, we have privacy "for free" when $\rho \ge \sqrt{d}\epsilon$.
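The "for free" regime can be read directly off the stated sample-complexity bound: a quick algebraic check (not part of the original abstract) shows that the second, privacy-dependent term is dominated by the first term exactly when $\rho \ge \sqrt{d}\epsilon$,

$$
\frac{d^{3/2}}{\rho\,\delta\epsilon^2} \le \frac{d}{\delta\epsilon^3}
\;\Longleftrightarrow\;
\frac{\sqrt{d}}{\rho} \le \frac{1}{\epsilon}
\;\Longleftrightarrow\;
\rho \ge \sqrt{d}\,\epsilon,
$$

so in that regime the bound reduces to $M=\tilde\Omega\!\left(\frac{d}{\delta\epsilon^3}\right)$, the non-private zeroth-order complexity.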
Cite
Text
Zhang et al. "Private Zeroth-Order Nonsmooth Nonconvex Optimization." International Conference on Learning Representations, 2024.
Markdown
[Zhang et al. "Private Zeroth-Order Nonsmooth Nonconvex Optimization." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/zhang2024iclr-private/)
BibTeX
@inproceedings{zhang2024iclr-private,
  title     = {{Private Zeroth-Order Nonsmooth Nonconvex Optimization}},
  author    = {Zhang, Qinzi and Tran, Hoang and Cutkosky, Ashok},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/zhang2024iclr-private/}
}