Perturbed-History Exploration in Stochastic Linear Bandits
Abstract
We propose a new online algorithm for cumulative regret minimization in a stochastic linear bandit. The algorithm pulls the arm with the highest estimated reward in a linear model trained on its perturbed history. Therefore, we call it perturbed-history exploration in a linear bandit (LinPHE). The perturbed history is a mixture of observed rewards and randomly generated i.i.d. pseudo-rewards. We derive a $\tilde{O}(d \sqrt{n})$ gap-free bound on the $n$-round regret of LinPHE, where $d$ is the number of features. The key steps in our analysis are new concentration and anti-concentration bounds on the weighted sum of Bernoulli random variables. To show the generality of our design, we generalize LinPHE to a logistic model. We evaluate our algorithms empirically and show that they are practical.
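The abstract describes LinPHE at a high level: fit a linear model to a history mixed with randomly generated i.i.d. pseudo-rewards, then pull the arm that looks best under that perturbed fit. The sketch below illustrates this loop under assumptions not stated on this page: rewards in [0, 1], Bernoulli(1/2) pseudo-rewards, ridge regression for the linear fit, and an illustrative perturbation-scale parameter `a`. It is a minimal sketch, not the authors' reference implementation.

```python
import numpy as np

def lin_phe(arms, n_rounds, reward_fn, a=1.0, lam=1.0):
    """Sketch of perturbed-history exploration in a linear bandit (LinPHE).

    arms      : (K, d) array of arm feature vectors
    reward_fn : callable(arm_index) -> stochastic reward in [0, 1]
    a         : number of Bernoulli(1/2) pseudo-rewards mixed in per
                observed reward (illustrative parameter name)
    lam       : ridge regularization strength (assumption, not from the paper text here)
    """
    K, d = arms.shape
    history = []  # list of (feature vector, observed reward)

    for t in range(n_rounds):
        if t < d:
            # Initialization: pull arms round-robin for the first d rounds (one simple choice).
            chosen = t % K
        else:
            # Perturbed history: each observed reward is mixed with `a`
            # i.i.d. Bernoulli(1/2) pseudo-rewards tied to the same features.
            A = lam * np.eye(d)
            b = np.zeros(d)
            for x, r in history:
                pseudo = np.random.binomial(1, 0.5, size=int(a)).sum()
                A += (1 + a) * np.outer(x, x)
                b += x * (r + pseudo)
            theta_hat = np.linalg.solve(A, b)
            # Act greedily with respect to the estimate from the perturbed history.
            chosen = int(np.argmax(arms @ theta_hat))

        r = reward_fn(chosen)
        history.append((arms[chosen], r))
    return history
```

A caller would supply the arm feature matrix and a stochastic reward oracle, e.g. `lin_phe(arms, 10000, lambda i: np.random.binomial(1, true_means[i]))`; the randomness injected through the pseudo-rewards is what drives exploration, so no explicit confidence bounds or posterior sampling are needed.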
Cite
Text
Kveton et al. "Perturbed-History Exploration in Stochastic Linear Bandits." Uncertainty in Artificial Intelligence, 2019.
Markdown
[Kveton et al. "Perturbed-History Exploration in Stochastic Linear Bandits." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/kveton2019uai-perturbedhistory/)
BibTeX
@inproceedings{kveton2019uai-perturbedhistory,
title = {{Perturbed-History Exploration in Stochastic Linear Bandits}},
author = {Kveton, Branislav and Szepesvári, Csaba and Ghavamzadeh, Mohammad and Boutilier, Craig},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2019},
pages = {530--540},
volume = {115},
url = {https://mlanthology.org/uai/2019/kveton2019uai-perturbedhistory/}
}