Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction
Abstract
There has been a recent surge of interest in nonconvex reformulations, via low-rank factorization, of stochastic convex semidefinite optimization problems, motivated by efficiency and scalability. Compared with the original convex formulations, the nonconvex ones typically involve far fewer variables, allowing them to scale to scenarios with millions of variables. However, this raises a new question: under what conditions can nonconvex stochastic algorithms find the global optima effectively, despite their empirical success in applications? In this paper, we provide an answer: a stochastic gradient descent method with variance reduction can be adapted to solve the nonconvex reformulation of the original convex problem with *global linear convergence*, i.e., it converges to a global optimum exponentially fast from a proper initial choice in the restricted strongly convex case. Experimental studies on both simulations and real-world ordinal embedding applications demonstrate the effectiveness of the proposed algorithms.
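To make the setup concrete, here is a minimal sketch of the kind of method the abstract describes: SVRG-style variance-reduced stochastic gradient descent on a low-rank (Burer-Monteiro) factorization X = UUᵀ of a least-squares semidefinite problem. This is an illustrative toy, not the paper's exact algorithm, objective, or experiments; the problem sizes, step size, and initialization radius are all assumptions.

```python
import numpy as np

# Hypothetical toy instance (assumed, not from the paper):
#   min_{X >= 0} (1/n) * sum_i (<A_i, X> - b_i)^2,
# reformulated via the low-rank factorization X = U U^T, U in R^{d x r}.
rng = np.random.default_rng(0)
d, r, n = 6, 2, 100

U_true = rng.standard_normal((d, r))
X_true = U_true @ U_true.T
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2           # symmetric sensing matrices
b = np.einsum('nij,ij->n', A, X_true)        # noiseless measurements

def grad_i(U, i):
    """Gradient of f_i(U) = (<A_i, U U^T> - b_i)^2, with A_i symmetric."""
    resid = np.sum(A[i] * (U @ U.T)) - b[i]
    return 4.0 * resid * (A[i] @ U)

def full_grad(U):
    return sum(grad_i(U, i) for i in range(n)) / n

# Start near a global optimum, in the spirit of the paper's
# "proper initial choice" (perturbation radius is an assumption).
U = U_true + 0.1 * rng.standard_normal((d, r))
err_init = np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true)

eta, epochs = 1e-4, 20                       # conservative step size (assumed)
for _ in range(epochs):
    U_snap = U.copy()                        # snapshot iterate
    mu = full_grad(U_snap)                   # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient step (SVRG-style)
        U -= eta * (grad_i(U, i) - grad_i(U_snap, i) + mu)

err_final = np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true)
```

The recovery error is measured on X = UUᵀ rather than on U itself, since the factorization is only identifiable up to an orthogonal rotation of U.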
Cite
Text
Zeng et al. "Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction." International Conference on Artificial Intelligence and Statistics, 2018.
Markdown
[Zeng et al. "Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/zeng2018aistats-finding/)
BibTeX
@inproceedings{zeng2018aistats-finding,
title = {{Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction}},
author = {Zeng, Jinshan and Ma, Ke and Yao, Yuan},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2018},
pages = {199-207},
url = {https://mlanthology.org/aistats/2018/zeng2018aistats-finding/}
}