Optimal Comparator Adaptive Online Learning with Switching Cost

Abstract

Practical online learning tasks are often naturally defined on unconstrained domains, where optimal algorithms for general convex losses are characterized by the notion of comparator adaptivity. In this paper, we design such algorithms in the presence of switching cost, which penalizes the typical optimism of adaptive algorithms and leads to a delicate design trade-off. Based on a novel dual space scaling strategy discovered through a continuous-time analysis, we propose a simple algorithm that improves the existing comparator adaptive regret bound [ZCP22a] to the optimal rate. The obtained benefits are further extended to the expert setting, and the practicality of the proposed algorithm is demonstrated through a sequential investment task.
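As a concrete illustration of the problem setting (not the paper's dual space scaling algorithm), the following Python sketch runs a classical comparator adaptive baseline, Krichevsky-Trofimov coin betting, on unconstrained online linear optimization while also tracking a switching cost penalty. The penalty weight lam, horizon T, and gradient sequence below are hypothetical choices for demonstration only.

import numpy as np

# Setting: unconstrained online linear optimization with switching cost.
# At round t the learner plays x_t in R, pays g_t * x_t, plus a movement
# penalty lam * |x_t - x_{t-1}|.  A comparator adaptive algorithm bounds
# regret against every fixed comparator u simultaneously, with the bound
# scaling gracefully in |u| rather than in a preset domain diameter.

rng = np.random.default_rng(0)
T, lam = 1000, 0.1
g = rng.choice([-1.0, 1.0], size=T) * 0.9   # adversarial gradients, |g_t| <= 1

# Krichevsky-Trofimov coin betting: a classical comparator adaptive
# baseline (illustrative only; NOT the algorithm proposed in the paper).
wealth, grad_sum, x_prev = 1.0, 0.0, 0.0
total_loss, total_switch = 0.0, 0.0
for t in range(1, T + 1):
    beta = grad_sum / t            # betting fraction from past (negated) gradients
    x = beta * wealth              # bet a fraction of the current wealth
    total_loss += g[t - 1] * x
    total_switch += lam * abs(x - x_prev)
    wealth -= g[t - 1] * x         # wealth update stays positive since |beta| < 1
    grad_sum -= g[t - 1]
    x_prev = x

u = -1.0 if g.sum() > 0 else 1.0   # a fixed unit comparator for reference
print(f"learner: loss + switching = {total_loss + total_switch:.2f}")
print(f"comparator u = {u:+.0f}: loss = {float(g.sum() * u):.2f}")

The switching cost term accumulated here is exactly the quantity that, per the abstract, penalizes the optimism of such adaptive updates and motivates the paper's improved design.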

Cite

Text

Zhang et al. "Optimal Comparator Adaptive Online Learning with Switching Cost." Neural Information Processing Systems, 2022.

Markdown

[Zhang et al. "Optimal Comparator Adaptive Online Learning with Switching Cost." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhang2022neurips-optimal/)

BibTeX

@inproceedings{zhang2022neurips-optimal,
  title     = {{Optimal Comparator Adaptive Online Learning with Switching Cost}},
  author    = {Zhang, Zhiyu and Cutkosky, Ashok and Paschalidis, Yannis},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhang2022neurips-optimal/}
}