Zeroth-Order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates

Abstract

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization. Specifically, we propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information. Furthermore, under a structural sparsity assumption, we first illustrate an implicit regularization phenomenon where the standard stochastic gradient algorithm with zeroth-order information adapts to the sparsity of the problem at hand by just varying the step-size. Next, we propose a truncated stochastic gradient algorithm with zeroth-order information, whose rate of convergence depends only poly-logarithmically on the dimensionality.
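The paper itself provides no code; as a rough illustration of the zeroth-order oracle model described above, the sketch below uses a two-point Gaussian-smoothing gradient estimator (function values only, no analytic gradients) inside a plain stochastic gradient loop. The function names, the smoothing parameter `mu`, and the step sizes are illustrative assumptions, not the authors' algorithms or rate-optimal parameter choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4):
    """Two-point Gaussian-smoothing gradient estimate of f at x.

    Draws u ~ N(0, I) and returns ((f(x + mu*u) - f(x)) / mu) * u,
    an unbiased estimator of the gradient of the smoothed function
    f_mu(x) = E_u[f(x + mu*u)], which approximates grad f for small mu.
    """
    u = np.random.randn(*x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zo_sgd(f, x0, step=0.01, iters=1000, mu=1e-4):
    """Plain stochastic gradient descent driven only by function evaluations."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * zo_gradient(f, x, mu)
    return x

# Toy usage: minimize a quadratic without ever querying its gradient.
if __name__ == "__main__":
    f = lambda x: 0.5 * np.sum(x ** 2)
    x_final = zo_sgd(f, x0=np.ones(10), step=0.05, iters=2000)
    print(np.linalg.norm(x_final))  # should be close to 0
```

The same zeroth-order estimate can be plugged into a conditional gradient (Frank-Wolfe) step instead of the gradient update, which is the setting the paper analyzes for the constrained case.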

Cite

Text

Balasubramanian and Ghadimi. "Zeroth-Order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates." Neural Information Processing Systems, 2018.

Markdown

[Balasubramanian and Ghadimi. "Zeroth-Order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/balasubramanian2018neurips-zerothorder/)

BibTeX

@inproceedings{balasubramanian2018neurips-zerothorder,
  title     = {{Zeroth-Order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates}},
  author    = {Balasubramanian, Krishnakumar and Ghadimi, Saeed},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3455--3464},
  url       = {https://mlanthology.org/neurips/2018/balasubramanian2018neurips-zerothorder/}
}