Pure Exploration of Multi-Armed Bandit Under Matroid Constraints
Abstract
We study the pure exploration problem subject to a matroid constraint (Best-Basis) in a stochastic multi-armed bandit game. In a Best-Basis instance, we are given $n$ stochastic arms with unknown reward distributions, as well as a matroid $\mathcal{M}$ over the arms. Let the weight of an arm be the mean of its reward distribution. Our goal is to identify a basis of $\mathcal{M}$ with the maximum total weight, using as few samples as possible. The problem is a significant generalization of the best arm identification problem and the top-$k$ arm identification problem, both of which have attracted considerable attention in recent years. We study both the exact and PAC versions of Best-Basis, and provide algorithms with nearly optimal sample complexities for these versions. Our results generalize and/or improve on several previous results for the top-$k$ arm identification problem and for the combinatorial pure exploration problem when the combinatorial constraint is a matroid.
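To make the objective concrete, here is a minimal Python sketch of the *offline* problem the learner must solve: given known arm weights and an independence oracle for $\mathcal{M}$, the greedy algorithm returns a maximum-weight basis (greedy is optimal on matroids). The oracle name `is_independent` and the example instance are illustrative assumptions, not from the paper; the paper's algorithms must identify this basis from samples of unknown distributions rather than from known means.

```python
from typing import Callable, List, Sequence

def max_weight_basis(
    weights: Sequence[float],
    is_independent: Callable[[List[int]], bool],
) -> List[int]:
    """Greedy maximum-weight basis of a matroid (offline problem).

    `is_independent` is a hypothetical oracle: given a list of arm
    indices, it reports whether that set is independent in M.
    Adding arms greedily in order of decreasing weight is optimal
    for any matroid.
    """
    basis: List[int] = []
    for arm in sorted(range(len(weights)), key=lambda i: -weights[i]):
        if is_independent(basis + [arm]):
            basis.append(arm)
    return basis

# Example: a uniform matroid of rank k (independent sets = all sets
# of size <= k), for which Best-Basis reduces to top-k identification.
k = 2
means = [0.9, 0.5, 0.7, 0.1]  # true (in the bandit setting, unknown) means
print(max_weight_basis(means, lambda s: len(s) <= k))  # -> [0, 2]
```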
Cite
Text
Chen et al. "Pure Exploration of Multi-Armed Bandit Under Matroid Constraints." Annual Conference on Computational Learning Theory, 2016.
Markdown
[Chen et al. "Pure Exploration of Multi-Armed Bandit Under Matroid Constraints." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/chen2016colt-pure/)
BibTeX
@inproceedings{chen2016colt-pure,
title = {{Pure Exploration of Multi-Armed Bandit Under Matroid Constraints}},
author = {Chen, Lijie and Gupta, Anupam and Li, Jian},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2016},
pages = {647--669},
url = {https://mlanthology.org/colt/2016/chen2016colt-pure/}
}