The Lazy Online Subgradient Algorithm Is Universal on Strongly Convex Domains

Abstract

We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt{N})$ regret against adversarial opponents; here we show it is universal, in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. This improves upon the more complex meta-algorithm of Huang et al., which only achieves $O(\sqrt{N \log N})$ and $O(\log N)$ bounds. In addition we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent for strongly convex domains.
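The algorithm in question is simple to state. Below is a minimal sketch of Lazy Online (Sub)Gradient Descent, also known as dual averaging, on the Euclidean unit ball, a prototypical strongly convex domain; the fixed step size eta and the helper names are illustrative assumptions, not the paper's exact setup or step-size schedule.

import numpy as np

def project_ball(y, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius}.
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

def lazy_subgradient_descent(subgradients, eta=0.1, radius=1.0):
    # Lazy OGD: each round plays the projection of the negative,
    # scaled *sum* of all past subgradients, rather than updating
    # the previous iterate as "greedy" OGD does.
    grad_sum = np.zeros_like(subgradients[0], dtype=float)
    plays = []
    for g in subgradients:
        x = project_ball(-eta * grad_sum, radius)  # lazy step
        plays.append(x)
        grad_sum += g  # subgradient revealed after the play
    return plays

# Example: linear losses f_t(x) = <g_t, x> with i.i.d. random g_t.
rng = np.random.default_rng(0)
gs = [rng.normal(size=3) for _ in range(100)]
xs = lazy_subgradient_descent(gs)

The point of the paper is that this single update, with no restarts or meta-algorithm layered on top, already adapts: it keeps its $O(\sqrt{N})$ adversarial guarantee while enjoying $O(\log N)$ expected regret when the losses are i.i.d.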

Cite

Text

Anderson and Leith. "The Lazy Online Subgradient Algorithm Is Universal on Strongly Convex Domains." Neural Information Processing Systems, 2021.

Markdown

[Anderson and Leith. "The Lazy Online Subgradient Algorithm Is Universal on Strongly Convex Domains." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/anderson2021neurips-lazy/)

BibTeX

@inproceedings{anderson2021neurips-lazy,
  title     = {{The Lazy Online Subgradient Algorithm Is Universal on Strongly Convex Domains}},
  author    = {Anderson, Daron and Leith, Douglas},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/anderson2021neurips-lazy/}
}