A Closer Look at Adaptive Regret
Abstract
For the prediction with expert advice setting, we consider methods to construct algorithms that have low adaptive regret. The adaptive regret of an algorithm on a time interval [t_1, t_2] is the loss of the algorithm on that interval minus the loss of the best expert on it. Adaptive regret measures how well the algorithm approximates the best expert locally, and it therefore sits between the classical regret (measured over the whole sequence of outcomes) and the tracking regret, where the algorithm is compared to a good sequence of experts. We investigate two existing intuitive methods to derive algorithms with low adaptive regret, one based on specialist experts and the other based on restarts. Quite surprisingly, we show that both methods lead to the same algorithm, namely Fixed Share, which is known for its tracking regret. Our main result is a thorough analysis of the adaptive regret of Fixed Share. We obtain the exact worst-case adaptive regret for Fixed Share, from which the classical tracking bounds can be derived. We also prove that Fixed Share is optimal, in the sense that no algorithm can have a better adaptive regret bound.
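The Fixed Share algorithm discussed in the abstract can be sketched as follows: maintain a weight per expert, apply an exponential-weights update after each round, then mix a small fraction alpha of every expert's weight back over the others. This is a minimal illustrative sketch (not the paper's analysis); the learning rate `eta`, switching rate `alpha`, and the particular sharing variant are assumptions chosen for simplicity.

```python
import numpy as np

def fixed_share(expert_losses, eta=1.0, alpha=0.05):
    """Minimal Fixed Share sketch over N experts.

    expert_losses: (T, N) array of per-round losses in [0, 1].
    Returns the cumulative expected loss of the weighted mixture.
    """
    T, N = expert_losses.shape
    w = np.full(N, 1.0 / N)  # uniform prior over experts
    total_loss = 0.0
    for t in range(T):
        losses = expert_losses[t]
        # Loss of predicting with the current weighted mixture.
        total_loss += float(w @ losses)
        # Exponential-weights (multiplicative) update, then renormalize.
        v = w * np.exp(-eta * losses)
        v /= v.sum()
        # Fixed Share step: each expert keeps a (1 - alpha) fraction of its
        # weight; the remainder is spread uniformly over the other experts.
        # This keeps every weight bounded away from zero, which is what
        # lets the algorithm recover quickly on any interval.
        w = (1 - alpha) * v + alpha * (v.sum() - v) / (N - 1)
    return total_loss
```

On a sequence where the best expert changes halfway through, the sharing step lets the algorithm's loss stay close to the locally best expert on each half, rather than only competing with the single best expert overall.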
Cite
Text
Adamskiy et al. "A Closer Look at Adaptive Regret." International Conference on Algorithmic Learning Theory, 2012. doi:10.1007/978-3-642-34106-9_24
Markdown
[Adamskiy et al. "A Closer Look at Adaptive Regret." International Conference on Algorithmic Learning Theory, 2012.](https://mlanthology.org/alt/2012/adamskiy2012alt-closer/) doi:10.1007/978-3-642-34106-9_24
BibTeX
@inproceedings{adamskiy2012alt-closer,
title = {{A Closer Look at Adaptive Regret}},
author = {Adamskiy, Dmitry and Koolen, Wouter M. and Chernov, Alexey V. and Vovk, Vladimir},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2012},
pages = {290-304},
doi = {10.1007/978-3-642-34106-9_24},
url = {https://mlanthology.org/alt/2012/adamskiy2012alt-closer/}
}