Stochastic Zeroth-Order Optimization in High Dimensions

Abstract

We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only logarithmically on the ambient dimension of the problem. Empirical results confirm our theoretical findings and show that the algorithms we design outperform classical zeroth-order optimization methods in the high-dimensional setting.
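The Lasso gradient estimator at the heart of the second algorithm admits a compact illustration. The sketch below is a minimal Python rendering of the idea, not the authors' implementation: finite differences along random Gaussian directions give noisy linear measurements of the gradient, and Lasso recovers it under sparsity. The sampling budget, smoothing radius, regularization weight, and the plain gradient step (used here in place of the paper's mirror descent update) are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_gradient_estimate(f, x, n_samples=60, delta=1e-2, lam=0.05, rng=None):
    """Estimate a sparse gradient of f at x from zeroth-order queries.

    Each finite difference (f(x + delta*z) - f(x)) / delta is approximately
    z^T grad f(x), i.e. a linear measurement of the gradient; Lasso
    regression recovers the gradient when it is sparse.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    Z = rng.standard_normal((n_samples, d))            # random query directions
    y = np.array([(f(x + delta * z) - f(x)) / delta for z in Z])
    model = Lasso(alpha=lam, fit_intercept=False)
    model.fit(Z, y)
    return model.coef_

# Hypothetical test problem: a noisy quadratic in d = 1000 that depends
# on only s = 5 coordinates, so the gradient is 5-sparse.
d, s = 1000, 5
support = np.arange(s)

def f(x):
    return 0.5 * np.sum(x[support] ** 2) + 1e-3 * np.random.randn()  # noisy oracle

x = np.ones(d)
for t in range(200):
    g_hat = lasso_gradient_estimate(f, x)
    x -= 0.5 * g_hat  # plain gradient step, standing in for mirror descent
print("final objective (noise-free part):", 0.5 * np.sum(x[support] ** 2))
```

Because the Lasso fit uses only O(s log d) measurements rather than O(d), the per-iteration query cost reflects the logarithmic dimension dependence the abstract describes.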

Cite

Text

Wang et al. "Stochastic Zeroth-Order Optimization in High Dimensions." International Conference on Artificial Intelligence and Statistics, 2018.

Markdown

[Wang et al. "Stochastic Zeroth-Order Optimization in High Dimensions." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/wang2018aistats-stochastic/)

BibTeX

@inproceedings{wang2018aistats-stochastic,
  title     = {{Stochastic Zeroth-Order Optimization in High Dimensions}},
  author    = {Wang, Yining and Du, Simon S. and Balakrishnan, Sivaraman and Singh, Aarti},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2018},
  pages     = {1356--1365},
  url       = {https://mlanthology.org/aistats/2018/wang2018aistats-stochastic/}
}