Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits
Abstract
We consider an adversarial variant of the classic $K$-armed linear contextual bandit problem where the sequence of loss functions associated with each arm is allowed to change without restriction over time. Under the assumption that the $d$-dimensional contexts are generated i.i.d. at random from a known distribution, we develop computationally efficient algorithms based on the classic Exp3 algorithm. Our first algorithm, RealLinExp3, is shown to achieve a regret guarantee of $\widetilde{O}(\sqrt{KdT})$ over $T$ rounds, which matches the best known lower bound for this problem. Our second algorithm, RobustLinExp3, is shown to be robust to misspecification, in that it achieves a regret bound of $\widetilde{O}((Kd)^{1/3}T^{2/3}) + \varepsilon \sqrt{d} T$ if the true reward function is linear up to an additive nonlinear error uniformly bounded in absolute value by $\varepsilon$. To our knowledge, our performance guarantees constitute the very first results in this problem setting.
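For intuition about the kind of algorithm the abstract describes, below is a minimal, hypothetical Python sketch of an Exp3-style learner with importance-weighted linear loss estimates. It assumes contexts drawn i.i.d. from a known (here, rescaled Gaussian) distribution and estimates the per-arm covariance $\Sigma_{t,a} = \mathbb{E}[\pi_t(a\mid X)XX^\top]$ by plain Monte Carlo over that distribution; all names, constants, and the loss model are illustrative, and this is a sketch rather than the authors' RealLinExp3 or RobustLinExp3.

```python
# Hypothetical sketch (not the authors' algorithm): an Exp3-style learner for a
# K-armed linear contextual bandit with i.i.d. contexts from a known distribution.
import numpy as np

rng = np.random.default_rng(0)
K, d, T = 5, 3, 2000        # arms, context dimension, rounds (illustrative values)
eta, gamma = 0.05, 0.05     # learning rate and uniform-exploration rate
n_mc = 512                  # Monte Carlo sample size for the covariance estimate

theta_true = rng.normal(size=(K, d)) / np.sqrt(d)   # per-arm loss vectors (could change each round)
cum_theta = np.zeros((K, d))                        # cumulative loss-estimate parameters

def policy(X, cum_theta):
    """Exponential weights over estimated cumulative losses, mixed with uniform exploration.
    X: (n, d) contexts -> (n, K) action probabilities."""
    scores = -eta * X @ cum_theta.T
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    p = w / w.sum(axis=1, keepdims=True)
    return (1.0 - gamma) * p + gamma / K

total_loss = 0.0
for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)             # context drawn i.i.d. from the known distribution
    p = policy(x[None, :], cum_theta)[0]
    a = rng.choice(K, p=p)
    loss = float(np.clip(theta_true[a] @ x + 0.1 * rng.normal(), -1.0, 1.0))
    total_loss += loss

    # Monte Carlo estimate of Sigma_{t,a} = E[pi_t(a|X) X X^T] using fresh contexts from the
    # known distribution (a stand-in for however the covariance is actually estimated).
    xs = rng.normal(size=(n_mc, d)) / np.sqrt(d)
    pa = policy(xs, cum_theta)[:, a]
    sigma = (xs * pa[:, None]).T @ xs / n_mc + 1e-3 * np.eye(d)   # small ridge for stability

    # Importance-weighted linear loss estimate for the played arm only:
    # theta_hat_{t,a} = Sigma_{t,a}^{-1} x_t * loss_t(a) * 1{A_t = a}.
    cum_theta[a] += np.linalg.solve(sigma, x) * loss

print(f"average loss over {T} rounds: {total_loss / T:.3f}")
```

The exponential-weights update and the uniform exploration mirror the Exp3 template mentioned in the abstract; the covariance-based loss estimator is what makes the per-round feedback usable despite only observing the loss of the played arm.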
Cite
Text
Neu and Olkhovskaya. "Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits." Conference on Learning Theory, 2020.
BibTeX
@inproceedings{neu2020colt-efficient,
title = {{Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits}},
author = {Neu, Gergely and Olkhovskaya, Julia},
booktitle = {Conference on Learning Theory},
year = {2020},
pages = {3049--3068},
volume = {125},
url = {https://mlanthology.org/colt/2020/neu2020colt-efficient/}
}