Escaping from Saddle Points on Riemannian Manifolds
Abstract
We consider minimizing a nonconvex, smooth function $f$ on a Riemannian manifold $\mathcal{M}$. We show that a perturbed version of the gradient descent algorithm converges to a second-order stationary point for this problem (and hence is able to escape saddle points on the manifold). While the unconstrained problem is well studied, our result is the first to prove such a convergence rate for nonconvex, manifold-constrained problems. The rate of convergence depends on the accuracy $\epsilon$ as $1/\epsilon^2$, which matches the rate known only for unconstrained smooth minimization. The convergence rate also depends polynomially on the parameters describing the curvature of the manifold and the smoothness of the function.
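The algorithm analyzed is a perturbed variant of Riemannian gradient descent: take gradient steps through a retraction and, whenever the Riemannian gradient becomes small, inject a random perturbation in the tangent space so the iterates can move away from saddle points. Below is a minimal sketch of this idea on the unit sphere, using tangent-space projection for the Riemannian gradient and normalization as the retraction; the step size, perturbation radius, and stopping behavior are illustrative assumptions, not the constants or criteria from the paper.

```python
import numpy as np

def perturbed_riemannian_gd(f_grad, x0, eta=0.05, grad_tol=1e-3,
                            perturb_radius=1e-3, max_iters=10_000, seed=0):
    """Sketch of perturbed Riemannian gradient descent on the sphere S^{d-1}.

    The retraction is normalization and the Riemannian gradient is the
    Euclidean gradient projected onto the tangent space at x. Parameter
    values are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iters):
        g = f_grad(x)
        # Riemannian gradient: project the Euclidean gradient onto T_x S^{d-1}.
        rgrad = g - np.dot(g, x) * x
        if np.linalg.norm(rgrad) < grad_tol:
            # Near a first-order stationary point: perturb in the tangent
            # space so the iterate can escape a saddle, then retract.
            xi = rng.standard_normal(x.shape)
            xi -= np.dot(xi, x) * x                 # project onto tangent space
            xi *= perturb_radius / (np.linalg.norm(xi) + 1e-12)
            x = x + xi
        else:
            x = x - eta * rgrad
        x /= np.linalg.norm(x)                      # retraction back to the sphere
    return x

# Usage: f(x) = x^T A x on the sphere has saddle points at the intermediate
# eigenvectors of A. Starting exactly at such a saddle, plain Riemannian
# gradient descent would stay stuck; the perturbation lets the iterates
# escape toward the eigenvector of the smallest eigenvalue.
if __name__ == "__main__":
    d = 5
    A = np.diag(np.arange(1.0, d + 1))
    x0 = np.eye(d)[2]                               # e_3, a saddle point of f
    x_star = perturbed_riemannian_gd(lambda x: 2 * A @ x, x0)
    print(x_star)                                   # close to +/- e_1
```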
Cite
Text
Sun et al. "Escaping from Saddle Points on Riemannian Manifolds." Neural Information Processing Systems, 2019.

Markdown

[Sun et al. "Escaping from Saddle Points on Riemannian Manifolds." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/sun2019neurips-escaping/)

BibTeX
@inproceedings{sun2019neurips-escaping,
title = {{Escaping from Saddle Points on Riemannian Manifolds}},
author = {Sun, Yue and Flammarion, Nicolas and Fazel, Maryam},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {7276--7286},
url = {https://mlanthology.org/neurips/2019/sun2019neurips-escaping/}
}