Competing Against Adaptive Strategies in Online Learning via Hints
Abstract
For many classic online learning settings, it is known that having a “hint” about the loss function before making a prediction yields significantly better regret guarantees. In this work we study the question: do hints allow us to go beyond the standard notion of regret (which competes against the best fixed strategy) and compete against adaptive or dynamic strategies? After all, if hints were perfect, we could clearly compete against a fully dynamic strategy. For some common online learning settings, we provide upper and lower bounds on the switching regret, i.e., the difference between the loss incurred by the algorithm and that of the optimal strategy in hindsight that switches state at most $L$ times, where $L$ is a given parameter. We show positive results for online linear optimization and the classic experts problem. Interestingly, such results turn out to be impossible for the classic bandit setting.
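The switching-regret comparator described in the abstract can be made concrete with a small dynamic program: given the full loss sequence in hindsight, compute the minimum total loss over expert sequences that switch at most $L$ times. This is an illustrative sketch only (the function name and setup are ours, not the paper's):

```python
def best_switching_loss(losses, L):
    """losses[t][i] = loss of expert i at round t.
    Returns the minimum total loss in hindsight over expert
    sequences that switch at most L times."""
    T, n = len(losses), len(losses[0])
    INF = float("inf")
    # dp[l][i]: min cumulative loss so far, currently playing expert i,
    # having used at most l switches
    dp = [[INF] * n for _ in range(L + 1)]
    for i in range(n):
        dp[0][i] = losses[0][i]
    for t in range(1, T):
        new = [[INF] * n for _ in range(L + 1)]
        for l in range(L + 1):
            # cheapest state we could switch from (uses one switch)
            best_prev = min(dp[l - 1]) if l > 0 else INF
            for i in range(n):
                new[l][i] = min(dp[l][i], best_prev) + losses[t][i]
        dp = new
    return min(min(row) for row in dp)
```

For example, with two experts whose losses swap halfway through, one switch recovers zero loss while a fixed expert (the standard regret comparator, $L = 0$) cannot; the switching regret of an algorithm is its total loss minus this quantity.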
Cite
Text
Bhaskara and Munagala. "Competing Against Adaptive Strategies in Online Learning via Hints." Artificial Intelligence and Statistics, 2023.
Markdown
[Bhaskara and Munagala. "Competing Against Adaptive Strategies in Online Learning via Hints." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/bhaskara2023aistats-competing/)
BibTeX
@inproceedings{bhaskara2023aistats-competing,
title = {{Competing Against Adaptive Strategies in Online Learning via Hints}},
author = {Bhaskara, Aditya and Munagala, Kamesh},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {10409--10424},
volume = {206},
url = {https://mlanthology.org/aistats/2023/bhaskara2023aistats-competing/}
}