Stochastic Mirror Descent in Variationally Coherent Optimization Problems

Abstract

In this paper, we examine a class of non-convex stochastic optimization problems which we call variationally coherent and which properly includes pseudo-/quasi-convex and star-convex optimization problems. To solve such problems, we focus on the widely used stochastic mirror descent (SMD) family of algorithms (which contains stochastic gradient descent as a special case), and we show that the last iterate of SMD converges to the problem's solution set with probability 1. This result contributes to the landscape of non-convex stochastic optimization by clarifying that neither pseudo-/quasi-convexity nor star-convexity is essential for (almost sure) global convergence; variational coherence, a much weaker requirement, suffices. We also characterize convergence rates for the subclass of strongly variationally coherent optimization problems and present simulation results.
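
Variational coherence requires, roughly, that the (expected) gradient field satisfies <grad f(x), x - x*> >= 0 for every feasible point x and every solution x*, with equality only at solutions; this is weaker than convexity and the relaxations listed above. To make the SMD update concrete, the following is a minimal sketch of one member of the SMD family, the entropic mirror map on the probability simplex (exponentiated gradient). The function names, step-size schedule, toy objective, and noise model are illustrative assumptions, not the paper's experimental setup.

import numpy as np

def smd_simplex(grad_oracle, x0, steps=5000, a=1.0, rng=None):
    """Stochastic mirror descent with the entropic mirror map on the simplex
    (multiplicative-weights / exponentiated-gradient update).

    grad_oracle(x, rng) should return an unbiased estimate of grad f(x).
    Step sizes a/(t+1) satisfy the usual conditions (sum = infinity,
    sum of squares finite) under which almost-sure convergence results
    of this kind are typically stated.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = grad_oracle(x, rng)        # noisy gradient sample
        step = a / (t + 1)             # vanishing step size
        x = x * np.exp(-step * g)      # mirror (dual-space) step
        x /= x.sum()                   # map back to the simplex (KL prox)
    return x                           # last iterate, as studied in the paper

# Toy usage (hypothetical): f(x) = ||x - p||^2 over the simplex with additive
# gradient noise. This objective is convex, hence trivially variationally
# coherent; it only illustrates the update, not the paper's experiments.
p = np.array([0.6, 0.3, 0.1])
oracle = lambda x, rng: 2 * (x - p) + 0.1 * rng.standard_normal(x.size)
print(smd_simplex(oracle, np.ones(3) / 3))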

Cite

Text

Zhou et al. "Stochastic Mirror Descent in Variationally Coherent Optimization Problems." Neural Information Processing Systems, 2017.

Markdown

[Zhou et al. "Stochastic Mirror Descent in Variationally Coherent Optimization Problems." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/zhou2017neurips-stochastic/)

BibTeX

@inproceedings{zhou2017neurips-stochastic,
  title     = {{Stochastic Mirror Descent in Variationally Coherent Optimization Problems}},
  author    = {Zhou, Zhengyuan and Mertikopoulos, Panayotis and Bambos, Nicholas and Boyd, Stephen and Glynn, Peter W.},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {7040--7049},
  url       = {https://mlanthology.org/neurips/2017/zhou2017neurips-stochastic/}
}