Towards Gradient Free and Projection Free Stochastic Optimization

Abstract

This paper focuses on the problem of \emph{constrained} \emph{stochastic} optimization. A zeroth order Frank-Wolfe algorithm is proposed, which, in addition to retaining the projection-free nature of the vanilla Frank-Wolfe algorithm, is also gradient free. Under convexity and smoothness assumptions, we show that the proposed algorithm converges to the optimal objective value at a rate $O\left(1/T^{1/3}\right)$, where $T$ denotes the iteration count. In particular, the primal sub-optimality gap is shown to have a dimension dependence of $O\left(d^{1/3}\right)$, which is the best-known dimension dependence among all zeroth order optimization algorithms using one directional derivative per iteration. For non-convex functions, the \emph{Frank-Wolfe} gap is shown to be $O\left(d^{1/3}T^{-1/4}\right)$. Experiments on black-box optimization setups demonstrate the efficacy of the proposed algorithm.
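The abstract describes combining a gradient-free (zeroth order) estimate of the gradient with the projection-free Frank-Wolfe update, which only requires a linear minimization oracle over the constraint set. The snippet below is a minimal illustrative sketch of that idea, assuming an $\ell_1$-ball constraint, a two-point random-direction gradient estimator, and simple averaging and step-size schedules chosen for illustration; it is not the authors' exact algorithm, estimator, or parameter choices.

```python
# Minimal sketch of a zeroth-order (gradient-free), projection-free Frank-Wolfe
# loop. Constraint set, gradient estimator, and schedules are illustrative
# assumptions, not the paper's exact construction.
import numpy as np

def zo_gradient_estimate(f, x, mu, rng):
    """Two-point estimate along one random direction; in expectation it
    approximates the gradient of f at x up to a smoothing bias."""
    z = rng.standard_normal(x.shape)           # random probing direction
    return (f(x + mu * z) - f(x)) / mu * z

def lmo_l1_ball(g, radius):
    """Linear minimization oracle: argmin_{||s||_1 <= radius} <g, s>."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

def zo_frank_wolfe(f, x0, radius, T, mu=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d_bar = np.zeros_like(x0)                  # averaged gradient surrogate
    for t in range(1, T + 1):
        rho = 1.0 / (t + 1) ** (2.0 / 3.0)     # averaging weight (illustrative)
        gamma = 1.0 / (t + 1)                  # Frank-Wolfe step size (illustrative)
        g = zo_gradient_estimate(f, x, mu, rng)
        d_bar = (1 - rho) * d_bar + rho * g    # smooth the noisy estimate
        s = lmo_l1_ball(d_bar, radius)         # projection-free update direction
        x = (1 - gamma) * x + gamma * s        # convex-combination update stays feasible
    return x

# Example usage: minimize a noisy quadratic (black-box access only) over an l1 ball.
if __name__ == "__main__":
    d = 20
    target = np.linspace(-1.0, 1.0, d)
    noisy_f = lambda x: np.sum((x - target) ** 2) + 0.01 * np.random.randn()
    x_hat = zo_frank_wolfe(noisy_f, x0=np.zeros(d), radius=5.0, T=2000)
    print("objective at estimate:", np.sum((x_hat - target) ** 2))
```

Because each update is a convex combination of the current iterate and a vertex returned by the linear minimization oracle, the iterates remain feasible without ever computing a projection, while the averaging step damps the noise of the single-direction derivative estimate.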

Cite

Text

Sahu et al. "Towards Gradient Free and Projection Free Stochastic Optimization." Artificial Intelligence and Statistics, 2019.

Markdown

[Sahu et al. "Towards Gradient Free and Projection Free Stochastic Optimization." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/sahu2019aistats-gradient/)

BibTeX

@inproceedings{sahu2019aistats-gradient,
  title     = {{Towards Gradient Free and Projection Free Stochastic Optimization}},
  author    = {Sahu, Anit Kumar and Zaheer, Manzil and Kar, Soummya},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {3468--3477},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/sahu2019aistats-gradient/}
}