On the Asymptotic Optimality of Confidence Interval Based Algorithms for Fixed Confidence MABs

Abstract

In this work, we address the challenge of identifying the optimal arm in a stochastic multi-armed bandit with the minimum number of arm pulls, subject to a prescribed error probability, in the fixed confidence setting. Our focus is on the asymptotic behavior of the sample complexity and of the distribution of arm weights at termination, as the error probability is taken to zero, under confidence-interval based algorithms. Specifically, we analyze the asymptotic sample complexity and termination weight fractions of the well-known LUCB algorithm, and introduce a new variant, the LUCB Greedy algorithm. We show that the upper bounds on the sample complexities of both algorithms are asymptotically within a constant factor of the established lower bounds.
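
For context, below is a minimal sketch of the standard LUCB algorithm for fixed-confidence best-arm identification, the baseline the paper analyzes. The confidence radius and its constants are one common illustrative choice, not necessarily the paper's exact construction, and the LUCB Greedy variant introduced in the paper is not shown.

import numpy as np


def lucb(pull, k, delta):
    """Return the index of the estimated best arm with error probability at most delta.

    pull(i) draws one stochastic reward in [0, 1] from arm i; k is the number of arms.
    """
    counts = np.zeros(k, dtype=int)
    sums = np.zeros(k)

    # Initialization: pull every arm once.
    for i in range(k):
        sums[i] += pull(i)
        counts[i] += 1
    t = k  # total number of pulls so far

    def beta(u, t):
        # Anytime confidence radius; these constants are a standard, illustrative choice.
        return np.sqrt(np.log(5 * k * t**4 / (4 * delta)) / (2 * u))

    while True:
        means = sums / counts
        radii = beta(counts, t)

        h = int(np.argmax(means))        # empirical leader
        ucb = means + radii
        ucb[h] = -np.inf
        l = int(np.argmax(ucb))          # most optimistic challenger

        # Stop when the challenger's upper confidence bound falls below
        # the leader's lower confidence bound.
        if means[l] + radii[l] <= means[h] - radii[h]:
            return h

        # Otherwise pull both the leader and the challenger.
        for i in (h, l):
            sums[i] += pull(i)
            counts[i] += 1
            t += 1


# Usage example (hypothetical instance): three Bernoulli arms, delta = 0.05.
rng = np.random.default_rng(0)
arm_means = [0.5, 0.4, 0.3]
best = lucb(lambda i: float(rng.random() < arm_means[i]), k=3, delta=0.05)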

Cite

Text

Kejriwal et al. "On the Asymptotic Optimality of Confidence Interval Based Algorithms for Fixed Confidence MABs." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I17.33959

Markdown

[Kejriwal et al. "On the Asymptotic Optimality of Confidence Interval Based Algorithms for Fixed Confidence MABs." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/kejriwal2025aaai-asymptotic/) doi:10.1609/AAAI.V39I17.33959

BibTeX

@inproceedings{kejriwal2025aaai-asymptotic,
  title     = {{On the Asymptotic Optimality of Confidence Interval Based Algorithms for Fixed Confidence MABs}},
  author    = {Kejriwal, Kushal and Karamchandani, Nikhil and Nair, Jayakrishnan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {17814--17821},
  doi       = {10.1609/AAAI.V39I17.33959},
  url       = {https://mlanthology.org/aaai/2025/kejriwal2025aaai-asymptotic/}
}