Chained Information-Theoretic Bounds and Tight Regret Rate for Linear Bandit Problems

Abstract

This paper studies the Bayesian regret of a variant of the Thompson Sampling algorithm for bandit problems. It builds upon the information-theoretic framework of [Russo & Van Roy, 2015] and, more specifically, on the rate-distortion analysis of [Dong & Van Roy, 2020], which proved a regret bound of rate $O(d\sqrt{T \log(T)})$ for the $d$-dimensional linear bandit setting. We focus on bandit problems with a metric action space and, using a chaining argument, establish new bounds that depend on the metric entropy of the action space for a Thompson Sampling variant. Under a suitable continuity assumption on the rewards, our bound yields the tight rate of $O(d\sqrt{T})$ for $d$-dimensional linear bandit problems.
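For context, the baseline algorithm the paper's variant builds on can be illustrated with a minimal sketch of standard Thompson Sampling for a $d$-dimensional linear bandit with a Gaussian prior and unit-variance Gaussian noise. This is not the paper's chained variant; the problem sizes and action set below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: d-dimensional linear bandit over a finite
# set of unit-norm actions (assumed setup, not from the paper).
d, n_actions, T = 3, 50, 1000
theta_star = rng.normal(size=d)                     # unknown parameter
actions = rng.normal(size=(n_actions, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)

# Gaussian prior N(0, I); with unit-variance Gaussian noise the
# posterior stays Gaussian with precision B and mean B^{-1} f
# (standard Bayesian linear-regression conjugacy).
B = np.eye(d)
f = np.zeros(d)

opt = (actions @ theta_star).max()                  # best mean reward
cum_regret = 0.0

for t in range(T):
    # Sample a parameter from the current posterior.
    mean = np.linalg.solve(B, f)
    cov = np.linalg.inv(B)
    theta_sample = rng.multivariate_normal(mean, cov)
    # Act greedily with respect to the sampled parameter.
    a = actions[np.argmax(actions @ theta_sample)]
    reward = a @ theta_star + rng.normal()          # noisy linear reward
    # Conjugate posterior update.
    B += np.outer(a, a)
    f += reward * a
    cum_regret += opt - a @ theta_star
```

The `cum_regret` accumulator tracks the (pseudo-)regret against the best fixed action; the paper's analysis bounds the Bayesian expectation of this quantity, with the chaining argument removing the $\sqrt{\log T}$ factor from the earlier rate-distortion bound.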

Cite

Text

Gouverneur et al. "Chained Information-Theoretic Bounds and Tight Regret Rate for Linear Bandit Problems." ICML 2024 Workshops: RLControlTheory, 2024.

Markdown

[Gouverneur et al. "Chained Information-Theoretic Bounds and Tight Regret Rate for Linear Bandit Problems." ICML 2024 Workshops: RLControlTheory, 2024.](https://mlanthology.org/icmlw/2024/gouverneur2024icmlw-chained/)

BibTeX

@inproceedings{gouverneur2024icmlw-chained,
  title     = {{Chained Information-Theoretic Bounds and Tight Regret Rate for Linear Bandit Problems}},
  author    = {Gouverneur, Amaury and Gálvez, Borja Rodríguez and Oechtering, Tobias and Skoglund, Mikael},
  booktitle = {ICML 2024 Workshops: RLControlTheory},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/gouverneur2024icmlw-chained/}
}