Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
Abstract
In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high-probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
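The paper's own algorithms are not reproduced on this page. As a rough illustration of the general idea, the following is a minimal sketch of learning-rate-free stochastic gradient descent on a Riemannian manifold (here, the unit sphere), using a distance-over-gradients-style adaptive step size in place of a hand-tuned learning rate. Everything in the sketch is an assumption for illustration, not the authors' method: the `stoch_grad` oracle, the sphere geometry helpers, the `eps` initialization, and the eigenvector example are all hypothetical.

```python
import numpy as np

def sphere_project(x):
    """Normalize a point back onto the unit sphere S^{d-1}."""
    return x / np.linalg.norm(x)

def sphere_riemannian_grad(x, euclid_grad):
    """Project a Euclidean gradient onto the tangent space at x."""
    return euclid_grad - np.dot(euclid_grad, x) * x

def sphere_retract(x, v):
    """Retraction: step from x along tangent vector v, then re-project."""
    return sphere_project(x + v)

def lr_free_rsgd(x0, stoch_grad, n_steps=1000, eps=1e-6, seed=0):
    """Learning-rate-free Riemannian SGD sketch (hypothetical, not the paper's
    algorithm): the step size eta_t is the running maximum distance travelled
    from the start point divided by the accumulated gradient norms, so no
    learning rate needs to be tuned by hand."""
    rng = np.random.default_rng(seed)
    x = sphere_project(np.asarray(x0, dtype=float))
    x_start = x.copy()
    max_dist = eps          # running max distance from x_start (seeded at eps)
    grad_sq_sum = 0.0       # accumulated squared Riemannian gradient norms
    for _ in range(n_steps):
        g = sphere_riemannian_grad(x, stoch_grad(x, rng))
        grad_sq_sum += np.dot(g, g)
        eta = max_dist / np.sqrt(grad_sq_sum + eps)
        x = sphere_retract(x, -eta * g)
        # Geodesic distance on the sphere is the arccos of the inner product.
        dist = np.arccos(np.clip(np.dot(x_start, x), -1.0, 1.0))
        max_dist = max(max_dist, dist)
    return x

# Hypothetical usage: estimate the leading eigenvector of A by minimizing
# f(x) = -x^T A x over the unit sphere, with a noisy gradient oracle.
A = np.diag([3.0, 1.0, 0.5])

def stoch_grad(x, rng):
    noise = 0.1 * rng.standard_normal(x.shape)
    return -2.0 * (A @ x) + noise   # Euclidean gradient of -x^T A x, plus noise

x_hat = lr_free_rsgd(np.ones(3), stoch_grad, n_steps=5000)
print(x_hat)   # should approach (1, 0, 0), the top eigenvector of A
```

The design mirrors how parameter-free methods work in general: since the distance to the optimum is unknown, the step size uses the distance travelled so far as an online surrogate, growing as the iterate moves farther and shrinking with the accumulated gradient mass.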
Cite
Text
Dodd et al. "Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds." International Conference on Machine Learning, 2024.

Markdown
[Dodd et al. "Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/dodd2024icml-learningratefree/)

BibTeX
@inproceedings{dodd2024icml-learningratefree,
  title = {{Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds}},
  author = {Dodd, Daniel and Sharrock, Louis and Nemeth, Christopher},
  booktitle = {International Conference on Machine Learning},
  year = {2024},
  pages = {11105--11148},
  volume = {235},
  url = {https://mlanthology.org/icml/2024/dodd2024icml-learningratefree/}
}