Impact of Representation Learning in Linear Bandits
Abstract
We study how representation learning can improve the efficiency of bandit problems. We consider the setting where we play $T$ linear bandits with dimension $d$ concurrently, and these $T$ bandit tasks share a common $k$-dimensional linear representation with $k \ll d$. For the finite-action setting, we present a new algorithm that achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit. When $T$ is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing the $T$ bandits independently), which achieves $\widetilde{O}(T\sqrt{dN})$ regret. We also provide an $\Omega(T\sqrt{kN} + \sqrt{dkNT})$ regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound that demonstrates the benefit of representation learning in certain regimes. Finally, we present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms.
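A quick way to see when the shared representation helps (an illustrative calculation, not taken from the paper itself) is to divide the two upper bounds:

$$\frac{T\sqrt{kN} + \sqrt{dkNT}}{T\sqrt{dN}} = \sqrt{\frac{k}{d}} + \sqrt{\frac{k}{T}},$$

so the new bound improves on the naive one whenever $k \ll d$ and $T \gg k$, consistent with the abstract's claim that the gain appears once $T$ is sufficiently large.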
Cite
Text
Yang et al. "Impact of Representation Learning in Linear Bandits." International Conference on Learning Representations, 2021.Markdown
[Yang et al. "Impact of Representation Learning in Linear Bandits." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/yang2021iclr-impact/)BibTeX
@inproceedings{yang2021iclr-impact,
  title     = {{Impact of Representation Learning in Linear Bandits}},
  author    = {Yang, Jiaqi and Hu, Wei and Lee, Jason D. and Du, Simon Shaolei},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/yang2021iclr-impact/}
}