Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits

Abstract

The Indexed Minimum Empirical Divergence (IMED) algorithm is a highly effective approach that offers a stronger theoretical guarantee of asymptotic optimality than the Kullback--Leibler Upper Confidence Bound (KL-UCB) algorithm for the multi-armed bandit problem. Additionally, it has been observed to empirically outperform UCB-based algorithms and Thompson Sampling. Despite its effectiveness, generalizing this algorithm to contextual bandits with linear payoffs has remained elusive. In this paper, we present novel linear versions of the IMED algorithm, which we call the family of LinIMED algorithms. We demonstrate that LinIMED provides a $\widetilde{O}(d\sqrt{T})$ upper bound on the regret, where $d$ is the dimension of the context and $T$ is the time horizon. Furthermore, extensive empirical studies reveal that LinIMED and its variants outperform widely used linear bandit algorithms such as LinUCB and Linear Thompson Sampling in some regimes.
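
Since this page carries only the abstract, the following Python snippet is a minimal, hypothetical sketch of what an IMED-style index looks like in the linear setting, not the paper's exact LinIMED index. The Gaussian-divergence surrogate gap^2 / (2 ||x_a||^2_{V^{-1}}) and the log exploration term are illustrative assumptions, as are all names in the snippet; the precise indices of the LinIMED family are defined in the paper.

import numpy as np

def linimed_style_indices(X, V, theta_hat):
    """Hypothetical IMED-style indices for a linear bandit (illustrative only).

    X         : (K, d) matrix whose rows are the arms' feature vectors.
    V         : (d, d) regularized design matrix, e.g. lam*I + sum_s x_s x_s^T.
    theta_hat : (d,) ridge estimate of the unknown parameter theta*.
    """
    V_inv = np.linalg.inv(V)
    mu_hat = X @ theta_hat                       # estimated mean reward of each arm
    gaps = mu_hat.max() - mu_hat                 # empirical gaps to the current best arm
    var = np.einsum('ki,ij,kj->k', X, V_inv, X)  # ||x_a||^2_{V^{-1}}, a variance proxy
    # Gaussian-divergence surrogate plus a log exploration term; both are
    # assumptions standing in for the paper's exact LinIMED index.
    return gaps**2 / (2.0 * var) + np.log(1.0 / var)

# Example: as in IMED, play the arm with the SMALLEST index.
K, d = 5, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(K, d))
V = np.eye(d) + X.T @ X          # pretend each arm has been pulled once
theta_hat = rng.normal(size=d)
arm = int(np.argmin(linimed_style_indices(X, V, theta_hat)))

The min-index selection rule mirrors the original IMED algorithm, where an arm's index grows with its empirical divergence from the best arm and shrinks as it becomes under-explored.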

Cite

Text

Bian and Tan. "Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits." Transactions on Machine Learning Research, 2024.

Markdown

[Bian and Tan. "Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/bian2024tmlr-indexed/)

BibTeX

@article{bian2024tmlr-indexed,
  title     = {{Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits}},
  author    = {Bian, Jie and Tan, Vincent Y. F.},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/bian2024tmlr-indexed/}
}