Non-Convex Online Learning via Algorithmic Equivalence
Abstract
We study an algorithmic equivalence technique between non-convex gradient descent and convex mirror descent. We start by looking at the harder problem of regret minimization in online non-convex optimization. We show that under certain geometric and smoothness conditions, online gradient descent applied to non-convex functions is an approximation of online mirror descent applied to convex functions under reparameterization. In continuous time, the gradient flow with this reparameterization was shown by Amid and Warmuth to be \emph{exactly} equivalent to continuous-time mirror descent, but theory for the analogous discrete-time algorithms was left as an open problem. We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, resolving this open problem. Our analysis is based on a new and simple algorithmic equivalence method.
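To make the reparameterization idea concrete, the sketch below is a minimal illustration and not the paper's algorithm: it uses the standard quadratic reparameterization x = u ⊙ u from Amid and Warmuth's continuous-time result, an assumed quadratic loss, and assumed step sizes. Gradient descent on the non-convex objective g(u) = ℓ(u ⊙ u) is compared with unnormalized exponentiated-gradient mirror descent on the convex loss ℓ(x); the two discrete-time iterates agree up to O(η²) per step, which is the approximation the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 5, 2000, 1e-3
x_star = rng.uniform(0.5, 2.0, size=d)      # target in the positive orthant (assumed loss)

def grad_ell(x):
    """Gradient of the convex loss ell(x) = 0.5 * ||x - x_star||^2."""
    return x - x_star

x0 = rng.uniform(0.5, 2.0, size=d)

# Non-convex gradient descent on u, where x = u ⊙ u.
u = np.sqrt(x0)
for _ in range(T):
    u = u - eta * 2.0 * u * grad_ell(u * u)  # chain rule: grad_u = 2u ⊙ grad_ell(x)
x_gd = u * u

# Convex mirror descent on x with mirror map gradient log(x), i.e. the
# unnormalized exponentiated-gradient update. Step size 4*eta matches the
# first-order expansion of the reparameterized GD update.
x_md = x0.copy()
for _ in range(T):
    x_md = x_md * np.exp(-4.0 * eta * grad_ell(x_md))

print("GD (reparameterized):", np.round(x_gd, 4))
print("Mirror descent      :", np.round(x_md, 4))
print("max |difference|    :", np.abs(x_gd - x_md).max())
```

With a small step size both trajectories converge toward the same minimizer and the final iterates differ only by higher-order terms; the paper's contribution is a regret analysis of this approximate discrete-time equivalence in the online setting.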
Cite
Text
Ghai et al. "Non-Convex Online Learning via Algorithmic Equivalence." Neural Information Processing Systems, 2022.
Markdown
[Ghai et al. "Non-Convex Online Learning via Algorithmic Equivalence." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ghai2022neurips-nonconvex/)
BibTeX
@inproceedings{ghai2022neurips-nonconvex,
title = {{Non-Convex Online Learning via Algorithmic Equivalence}},
author = {Ghai, Udaya and Lu, Zhou and Hazan, Elad},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/ghai2022neurips-nonconvex/}
}