Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits

Abstract

We consider the stochastic linear contextual bandit problem with high-dimensional features. We analyze the Thompson sampling algorithm using special classes of sparsity-inducing priors (e.g., spike-and-slab) to model the unknown parameter and provide a nearly optimal upper bound on the expected cumulative regret. To the best of our knowledge, this is the first work to provide theoretical guarantees for Thompson sampling in high-dimensional, sparse contextual bandits. For faster computation, we use variational inference instead of Markov chain Monte Carlo (MCMC) to approximate the posterior distribution. Extensive simulations demonstrate the improved performance of our proposed algorithm over existing ones.
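For readers unfamiliar with the setting, the loop below is a minimal sketch of Thompson sampling on a synthetic sparse linear contextual bandit. It uses a standard Gaussian (ridge) prior with a closed-form posterior, not the paper's sparsity-inducing spike-and-slab priors or variational approximation; the dimensions, horizon, and true parameter are illustrative choices.

```python
import numpy as np

# Generic Gaussian-prior Thompson sampling (LinTS) sketch -- NOT the
# paper's spike-and-slab / variational-inference algorithm.
rng = np.random.default_rng(0)

d, K, T = 5, 4, 200                 # dimension, arms per round, horizon
theta_star = np.zeros(d)
theta_star[:2] = [1.0, -0.5]        # sparse true parameter (illustrative)
sigma2, lam = 0.25, 1.0             # noise variance, ridge prior precision

B = lam * np.eye(d)                 # scaled posterior precision
f = np.zeros(d)                     # accumulated context * reward
regret = 0.0

for t in range(T):
    X = rng.normal(size=(K, d))                   # contexts for this round
    mu = np.linalg.solve(B, f)                    # posterior mean
    cov = sigma2 * np.linalg.inv(B)               # posterior covariance
    theta = rng.multivariate_normal(mu, cov)      # sample from posterior
    a = int(np.argmax(X @ theta))                 # act greedily on the sample
    r = X[a] @ theta_star + np.sqrt(sigma2) * rng.normal()
    B += np.outer(X[a], X[a])                     # conjugate Bayesian update
    f += X[a] * r
    regret += (X @ theta_star).max() - X[a] @ theta_star
```

The paper's contribution is to replace the Gaussian posterior above with one induced by a sparsity-inducing prior, approximated via variational inference, and to bound the resulting cumulative regret.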

Cite

Text

Chakraborty et al. "Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits." International Conference on Machine Learning, 2023.

Markdown

[Chakraborty et al. "Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/chakraborty2023icml-thompson/)

BibTeX

@inproceedings{chakraborty2023icml-thompson,
  title     = {{Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits}},
  author    = {Chakraborty, Sunrit and Roy, Saptarshi and Tewari, Ambuj},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {3979--4008},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/chakraborty2023icml-thompson/}
}