Nearly Instance Optimal Sample Complexity Bounds for Top-K Arm Selection
Abstract
In the Best-$k$-Arm problem, we are given $n$ stochastic bandit arms, each associated with an unknown reward distribution. We are required to identify the $k$ arms with the largest means by taking as few samples as possible. In this paper, we make progress towards a complete characterization of the instance-wise sample complexity bounds for the Best-$k$-Arm problem. On the lower bound side, we obtain a novel complexity term that measures the sample complexity every Best-$k$-Arm instance requires. This is derived via an interesting and nontrivial reduction from the Best-$1$-Arm problem. We also provide an elimination-based algorithm that matches the instance-wise lower bound within doubly-logarithmic factors. The sample complexity of our algorithm strictly dominates the state-of-the-art for Best-$k$-Arm (modulo constant factors).
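As a hedged illustration of the elimination paradigm the abstract refers to (a generic successive-elimination sketch, not the paper's algorithm), the sketch below repeatedly samples the surviving arms, accepts an arm once its confidence lower bound clears the upper bound of every arm outside the current empirical top slots, and discards arms that are clearly outside the top $k$. The function name, batch size, and confidence radius are illustrative choices, not from the paper.

```python
import math
import random

def top_k_elimination(arms, k, delta=0.05, batch=100, max_rounds=50):
    """Successive-elimination sketch for top-k arm identification.

    arms: list of zero-argument callables returning a sample in [0, 1].
    Returns the indices of the k arms believed to have the largest means.
    """
    n = len(arms)
    active = list(range(n))   # arms still in contention
    accepted = []             # arms confirmed to be in the top k
    counts = [0] * n
    sums = [0.0] * n

    for _ in range(max_rounds):
        need = k - len(accepted)   # top-k slots still open
        if need == 0:
            break
        if len(active) <= need:    # only `need` candidates remain: take them all
            accepted += active
            active = []
            break
        # Pull every surviving arm `batch` more times.
        for i in active:
            for _ in range(batch):
                sums[i] += arms[i]()
                counts[i] += 1
        mean = {i: sums[i] / counts[i] for i in active}
        # Hoeffding-style confidence radius with a crude union bound
        # over arms and sample counts (an illustrative choice).
        rad = {i: math.sqrt(math.log(4 * n * counts[i] ** 2 / delta)
                            / (2 * counts[i])) for i in active}
        ranked = sorted(active, key=lambda i: mean[i], reverse=True)
        # UCB of the best arm outside the current empirical top `need`,
        # and LCB of the worst arm inside it.
        ucb_out = mean[ranked[need]] + rad[ranked[need]]
        lcb_in = mean[ranked[need - 1]] - rad[ranked[need - 1]]
        for i in list(active):
            if mean[i] - rad[i] > ucb_out:    # clearly in the top k: accept
                accepted.append(i)
                active.remove(i)
            elif mean[i] + rad[i] < lcb_in:   # clearly outside: eliminate
                active.remove(i)
    # If the round budget ran out, fill the remaining slots
    # with the best surviving empirical means.
    active.sort(key=lambda i: sums[i] / counts[i], reverse=True)
    return sorted(accepted + active[: k - len(accepted)])
```

For well-separated Bernoulli arms (e.g. means 0.9, 0.8, 0.2, 0.1 with $k=2$), the sketch accepts the two best arms after a round or two; the instance-dependent behavior the paper analyzes comes precisely from how the per-arm gaps control when each arm is accepted or eliminated.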
Cite
Text
Chen et al. "Nearly Instance Optimal Sample Complexity Bounds for Top-K Arm Selection." International Conference on Artificial Intelligence and Statistics, 2017.
Markdown
[Chen et al. "Nearly Instance Optimal Sample Complexity Bounds for Top-K Arm Selection." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/chen2017aistats-nearly/)
BibTeX
@inproceedings{chen2017aistats-nearly,
title = {{Nearly Instance Optimal Sample Complexity Bounds for Top-K Arm Selection}},
author = {Chen, Lijie and Li, Jian and Qiao, Mingda},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2017},
pages = {101--110},
url = {https://mlanthology.org/aistats/2017/chen2017aistats-nearly/}
}