A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis
Abstract
This paper proposes a general framework of Riemannian adaptive optimization methods. The framework encapsulates several stochastic optimization algorithms on Riemannian manifolds and incorporates the mini-batch strategy that is often used in deep learning. Within this framework, we also propose AMSGrad on embedded submanifolds of Euclidean space. Moreover, we give convergence analyses valid for both a constant and a diminishing step size. Our analyses also reveal the relationship between the convergence rate and the mini-batch size. In numerical experiments, we applied the proposed algorithm to principal component analysis and the low-rank matrix completion problem, both of which can be formulated as Riemannian optimization problems. Python implementations of the methods used in the numerical experiments are available at https://github.com/iiduka-researches/202408-adaptive.
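To illustrate the flavor of Riemannian optimization mentioned in the abstract, the sketch below runs plain Riemannian gradient ascent on the unit sphere to recover the leading principal component. This is a minimal illustration, not the paper's adaptive algorithm: the step size, iteration count, and data are hypothetical, and the retraction used here is simple renormalization.

```python
import numpy as np

# Illustrative sketch (not the paper's method): Riemannian gradient ascent
# on the unit sphere S^{n-1} to maximize the Rayleigh quotient f(x) = x^T A x,
# whose maximizer is the leading principal component of A.

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
A = X.T @ X / len(X)              # sample covariance matrix

x = rng.standard_normal(5)
x /= np.linalg.norm(x)            # initial point on the sphere

for _ in range(500):
    egrad = 2 * A @ x                   # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
    x = x + 0.1 * rgrad                 # ascent step in the tangent direction
    x /= np.linalg.norm(x)              # retraction: map back onto the sphere

# sanity check against the top eigenvector from an eigendecomposition
w, V = np.linalg.eigh(A)
top = V[:, -1]
print(abs(x @ top))  # close to 1.0 once converged
```

The tangent-space projection and retraction are the two manifold-specific ingredients; the framework in the paper generalizes the Euclidean update (e.g., AMSGrad's adaptive step) to such manifold operations.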
Cite
Text
Sakai and Iiduka. "A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis." Transactions on Machine Learning Research, 2025.
Markdown
[Sakai and Iiduka. "A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/sakai2025tmlr-general/)
BibTeX
@article{sakai2025tmlr-general,
title = {{A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis}},
author = {Sakai, Hiroyuki and Iiduka, Hideaki},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/sakai2025tmlr-general/}
}