Riemannian Stochastic Recursive Momentum Method for Non-Convex Optimization

Abstract

We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves near-optimal complexity for finding an epsilon-approximate solution using only one sample per iteration. The algorithm requires a single stochastic gradient evaluation per iteration and does not require periodic restarts with a large-batch gradient, a device commonly used to obtain faster rates. Extensive experimental results demonstrate the superiority of the proposed algorithm. Extensions to nonsmooth and constrained optimization settings are also discussed.
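The recursive-momentum idea the abstract describes can be sketched as follows: at each iteration, draw a single sample, evaluate the stochastic gradient at both the current and previous iterate under that same sample, transport the old estimator to the current tangent space, and blend. The sketch below runs on the unit sphere with projection-based transport and a toy Rayleigh-quotient objective; the function names, constant momentum parameter `a`, step size, and test problem are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical sketch of a one-sample recursive-momentum (STORM-style) update
# on the unit sphere, a simple Riemannian manifold. Names, constants, and the
# toy problem are illustrative assumptions, not the paper's exact method.

def proj(x, v):
    """Orthogonal projection of v onto the tangent space at x (sphere)."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Retraction: step along tangent vector v, renormalize onto the sphere."""
    y = x + v
    return y / np.linalg.norm(y)

def rsrm(grad_at, draw_sample, x0, n_iters=2000, eta=0.05, a=0.1, seed=0):
    """Riemannian recursive momentum with one stochastic sample per iteration.

    grad_at(x, xi): stochastic Riemannian gradient at x under sample xi.
    draw_sample(rng): draws one sample xi.
    """
    rng = np.random.default_rng(seed)
    x_prev = x0
    d = grad_at(x0, draw_sample(rng))      # initial one-sample estimator
    x = retract(x0, -eta * d)
    for _ in range(n_iters - 1):
        xi = draw_sample(rng)              # single fresh sample per iteration
        g_new = grad_at(x, xi)
        g_old = grad_at(x_prev, xi)        # same sample, previous iterate
        # recursive momentum: transport old estimator (by projection), correct
        d = g_new + (1 - a) * proj(x, d - g_old)
        x_prev, x = x, retract(x, -eta * proj(x, d))
    return x

# Toy problem: minimize f(x) = -0.5 x'Ax on the sphere, i.e. find the
# leading eigenvector of A (here e_10, since A = diag(1, ..., 10)).
n = 10
A = np.diag(np.arange(1.0, n + 1))

def grad_at(x, xi):
    # Riemannian gradient of f plus additive sample noise xi
    return proj(x, -(A @ x) + xi)

def draw_sample(rng):
    return 0.1 * rng.standard_normal(n)

x0 = np.ones(n) / np.sqrt(n)
x = rsrm(grad_at, draw_sample, x0)
print(abs(x[-1]))  # close to 1: last coordinate dominates near convergence
```

Note that both gradient evaluations in an iteration share the sample `xi`, which is what lets the estimator's variance shrink without any large-batch restart.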

Cite

Text

Han and Gao. "Riemannian Stochastic Recursive Momentum Method for Non-Convex Optimization." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/345

Markdown

[Han and Gao. "Riemannian Stochastic Recursive Momentum Method for Non-Convex Optimization." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/han2021ijcai-riemannian/) doi:10.24963/IJCAI.2021/345

BibTeX

@inproceedings{han2021ijcai-riemannian,
  title     = {{Riemannian Stochastic Recursive Momentum Method for Non-Convex Optimization}},
  author    = {Han, Andi and Gao, Junbin},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {2505--2511},
  doi       = {10.24963/IJCAI.2021/345},
  url       = {https://mlanthology.org/ijcai/2021/han2021ijcai-riemannian/}
}