Regret Bounds Without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
Abstract
In online convex optimization (OCO), Lipschitz continuity of the loss functions is commonly assumed in order to obtain sublinear regret, and many algorithms achieve only logarithmic regret when these functions are also strongly convex. Recently, researchers in convex optimization proposed the notions of "relative Lipschitz continuity" and "relative strong convexity", both of which generalize their classical counterparts. It has been shown that subgradient methods in the relative setting have performance analogous to their performance in the classical setting.
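To make the OCO setup concrete, here is a minimal sketch of the classical baseline the paper generalizes: projected online subgradient descent with cumulative regret measured against the best fixed point in hindsight. The losses, feasible interval, step-size constants, and comparator below are hypothetical choices for illustration, not the paper's relative-Lipschitz algorithm.

```python
import math

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the (hypothetical) feasible interval [lo, hi]."""
    return max(lo, min(hi, x))

def online_subgradient_descent(zs, D=2.0, G=4.0):
    """Play projected online subgradient descent against the loss sequence
    f_t(x) = (x - z_t)^2 on [-1, 1], with step size eta_t = D / (G * sqrt(t)).
    Returns the per-round losses suffered by the learner."""
    x = 0.0
    losses = []
    for t, z in enumerate(zs, start=1):
        losses.append((x - z) ** 2)          # suffer the loss, then observe feedback
        g = 2.0 * (x - z)                    # (sub)gradient of f_t at the played point
        x = project(x - (D / (G * math.sqrt(t))) * g)
    return losses

# Hypothetical alternating loss sequence and the standard regret computation.
zs = [0.5 if t % 2 == 0 else 0.3 for t in range(1000)]
losses = online_subgradient_descent(zs)
u = sum(zs) / len(zs)                        # best fixed comparator in hindsight
regret = sum(losses) - sum((u - z) ** 2 for z in zs)
print(regret / len(zs))                      # average regret shrinks as T grows
```

Under the classical assumptions (bounded domain diameter D, subgradients bounded by G in norm), this scheme attains O(sqrt(T)) regret; the relative notions discussed in the abstract are what let analogous guarantees go through when the global Lipschitz bound G fails to exist.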
Cite

Text
Zhou et al. "Regret Bounds Without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses." Neural Information Processing Systems, 2020.

Markdown
[Zhou et al. "Regret Bounds Without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zhou2020neurips-regret/)

BibTeX
@inproceedings{zhou2020neurips-regret,
title = {{Regret Bounds Without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses}},
author = {Zhou, Yihan and Portella, Victor Sanches and Schmidt, Mark and Harvey, Nicholas},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/zhou2020neurips-regret/}
}