Anytime Model Selection in Linear Bandits
Abstract
Model selection in the context of bandit optimization is a challenging problem, as it requires balancing exploration and exploitation not only for action selection, but also for model selection. One natural approach is to rely on online learning algorithms that treat different models as experts. Existing methods, however, scale poorly ($\mathrm{poly}(M)$) with the number of models $M$ in terms of their regret. We develop ALEXP, an anytime algorithm whose regret has an exponentially improved ($\log M$) dependence on $M$. We neither require knowledge of the horizon $n$, nor rely on an initial purely exploratory stage. Our approach utilizes a novel time-uniform analysis of the Lasso, defining a self-normalized martingale sequence based on the empirical process error, and establishes a new connection between interactive learning and high-dimensional statistics.
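To make the experts-based view concrete, below is a minimal, illustrative sketch of an anytime exponential-weights loop over $M$ model experts. It is not the paper's ALEXP: the placeholder uniform losses stand in for the Lasso-based reward estimates the paper constructs, and all names (`log_w`, `eta_t`, the loop structure) are assumptions for illustration. The time-varying learning rate $\eta_t = \sqrt{\log(M)/t}$ shows how such updates can run without knowing the horizon $n$ in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

M, n = 10, 500                       # number of models; horizon (unknown to the learner)
log_w = np.zeros(M)                  # log-weights over the M model experts

for t in range(1, n + 1):
    eta_t = np.sqrt(np.log(M) / t)   # anytime learning rate: no horizon knowledge needed
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()             # numerically stable softmax over experts
    j = rng.choice(M, p=probs)       # sample a model to act with this round
    # Placeholder: in the bandit setting these would be estimated losses
    # built from the single observed reward (e.g., via importance weighting
    # or, in the paper, Lasso-based estimates shared across experts).
    losses = rng.uniform(size=M)
    log_w -= eta_t * losses          # exponential-weights update
```

The key design point mirrored here is the anytime learning rate: because $\eta_t$ shrinks with $t$ rather than being tuned to a fixed $n$, the loop can be stopped at any round, matching the abstract's claim of not requiring the horizon.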
Cite
Text
Kassraie et al. "Anytime Model Selection in Linear Bandits." NeurIPS 2023 Workshops: ReALML, 2023.
Markdown
[Kassraie et al. "Anytime Model Selection in Linear Bandits." NeurIPS 2023 Workshops: ReALML, 2023.](https://mlanthology.org/neuripsw/2023/kassraie2023neuripsw-anytime/)
BibTeX
@inproceedings{kassraie2023neuripsw-anytime,
  title     = {{Anytime Model Selection in Linear Bandits}},
  author    = {Kassraie, Parnian and Emmenegger, Nicolas and Krause, Andreas and Pacchiano, Aldo},
  booktitle = {NeurIPS 2023 Workshops: ReALML},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/kassraie2023neuripsw-anytime/}
}