Almost Optimal Batch-Regret Tradeoff for Batch Linear Contextual Bandits

Abstract

We study the optimal batch-regret tradeoff for batch linear contextual bandits. For this problem, we design batch learning algorithms and prove that they achieve the optimal regret bounds (up to logarithmic factors) for any batch number $M$, number of actions $K$, time horizon $T$, and dimension $d$. Therefore, we establish the \emph{full-parameter-range} (almost) optimal batch-regret tradeoff for the batch linear contextual bandit problem. In the course of our analysis, we also prove a new matrix concentration inequality with a dependence on the matrices' dynamic upper bounds, which, to the best of our knowledge, is the first of its kind in the literature and may be of independent interest.

Cite

Text

Zhang et al. "Almost Optimal Batch-Regret Tradeoff for Batch Linear Contextual Bandits." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "Almost Optimal Batch-Regret Tradeoff for Batch Linear Contextual Bandits." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-almost/)

BibTeX

@inproceedings{zhang2025iclr-almost,
  title     = {{Almost Optimal Batch-Regret Tradeoff for Batch Linear Contextual Bandits}},
  author    = {Zhang, Zihan and Ji, Xiangyang and Zhou, Yuan},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-almost/}
}