Dynamic Regret of Strongly Adaptive Methods

Abstract

To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently. In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms that enjoy small dynamic regret for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity is utilized to upper bound the dynamic regret. Moreover, none of these adaptive algorithms requires prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.
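For context, here is a minimal sketch of the quantities the abstract refers to; the notation follows standard usage in the online-learning literature and is assumed rather than quoted from the paper. The learner plays $\mathbf{x}_t$ against a loss $f_t$ over a domain $\mathcal{X}$.

Dynamic regret against a comparator sequence $u_1, \dots, u_T$:
$$\mathrm{D\text{-}Regret}(u_1, \dots, u_T) = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(u_t)$$

Strongly adaptive regret at interval length $\tau$ (worst case over all length-$\tau$ intervals):
$$\mathrm{SA\text{-}Regret}(T, \tau) = \max_{[s,\, s+\tau-1] \subseteq [T]} \left( \sum_{t=s}^{s+\tau-1} f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=s}^{s+\tau-1} f_t(\mathbf{x}) \right)$$

Functional variation of the loss sequence:
$$V_T = \sum_{t=2}^{T} \max_{\mathbf{x} \in \mathcal{X}} \left| f_t(\mathbf{x}) - f_{t-1}(\mathbf{x}) \right|$$

Splitting the horizon into intervals of length $\tau$ and bounding the dynamic regret on each interval by the adaptive regret plus the variation inside it yields, up to constants, a bound of the form
$$\mathrm{D\text{-}Regret} \le \min_{\tau} \Big( \lceil T/\tau \rceil \cdot \mathrm{SA\text{-}Regret}(T, \tau) + O(\tau\, V_T) \Big),$$
which a strongly adaptive algorithm attains without knowing $V_T$ in advance.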

Cite

Text

Zhang et al. "Dynamic Regret of Strongly Adaptive Methods." International Conference on Machine Learning, 2018.

Markdown

[Zhang et al. "Dynamic Regret of Strongly Adaptive Methods." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/zhang2018icml-dynamic/)

BibTeX

@inproceedings{zhang2018icml-dynamic,
  title     = {{Dynamic Regret of Strongly Adaptive Methods}},
  author    = {Zhang, Lijun and Yang, Tianbao and Jin, Rong and Zhou, Zhi-Hua},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {5882--5891},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/zhang2018icml-dynamic/}
}