Unconstrained Online Learning with Unbounded Losses
Abstract
Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees $R_{T}(u)\le \tilde O(G\|u\|\sqrt{T}+L\|u\|^{2}\sqrt{T})$ regret on any problem where the subgradients satisfy $\|g_{t}\|\le G+L\|w_{t}\|$, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel $L^{*}$ bound when the losses are smooth.
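As a concrete illustration of the growth condition (our own worked example, not taken from the abstract): the squared loss $\ell_{t}(w)=\tfrac{1}{2}\|w-z_{t}\|^{2}$ with bounded targets $\|z_{t}\|\le G$ is non-Lipschitz over an unbounded domain, yet its gradient $g_{t}=\nabla\ell_{t}(w_{t})=w_{t}-z_{t}$ satisfies $\|g_{t}\|\le\|z_{t}\|+\|w_{t}\|\le G+L\|w_{t}\|$ with $L=1$, so such losses fall within the setting and the stated guarantee becomes $R_{T}(u)\le \tilde O(G\|u\|\sqrt{T}+\|u\|^{2}\sqrt{T})$ for every comparator $u$.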
Cite
Jacobsen and Cutkosky. "Unconstrained Online Learning with Unbounded Losses." International Conference on Machine Learning, 2023.
@inproceedings{jacobsen2023icml-unconstrained,
title = {{Unconstrained Online Learning with Unbounded Losses}},
author = {Jacobsen, Andrew and Cutkosky, Ashok},
booktitle = {International Conference on Machine Learning},
year = {2023},
  pages = {14590--14630},
volume = {202},
url = {https://mlanthology.org/icml/2023/jacobsen2023icml-unconstrained/}
}