On the Performance of Thompson Sampling on Logistic Bandits
Abstract
We study the logistic bandit, in which rewards are binary with success probability $\exp(\beta a^\top \theta) / (1 + \exp(\beta a^\top \theta))$ and the actions $a$ and coefficients $\theta$ lie in the $d$-dimensional unit ball. While prior regret bounds for algorithms addressing the logistic bandit exhibit exponential dependence on the slope parameter $\beta$, we establish a regret bound for Thompson sampling that is independent of $\beta$. Specifically, we establish that, when the set of feasible actions is identical to the set of possible coefficient vectors, the Bayesian regret of Thompson sampling is $\tilde{O}(d\sqrt{T})$. We also establish a $\tilde{O}(\sqrt{d\eta T}/\Delta)$ bound that applies more broadly, where $\Delta$ is the worst-case optimal log-odds and $\eta$ is the “fragility dimension,” a new statistic we define to capture the degree to which an optimal action for one model fails to satisfice for others. We demonstrate that the fragility dimension plays an essential role by showing that, for any $\epsilon > 0$, no algorithm can achieve $\mathrm{poly}(d, 1/\Delta)\cdot T^{1-\epsilon}$ regret.
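The reward model in the abstract can be made concrete with a short numerical sketch. The following is an illustration only, assuming NumPy; the function names (`success_prob`, `logistic_reward`) are hypothetical and not from the paper:

```python
import numpy as np

def success_prob(a, theta, beta):
    # Success probability exp(beta * a^T theta) / (1 + exp(beta * a^T theta)),
    # written in the equivalent sigmoid form 1 / (1 + exp(-beta * a^T theta)).
    z = beta * float(np.dot(a, theta))
    return 1.0 / (1.0 + np.exp(-z))

def logistic_reward(a, theta, beta, rng):
    # Binary reward drawn according to the logistic model above.
    return int(rng.random() < success_prob(a, theta, beta))

# Example: action and coefficient vector on the 2-dimensional unit ball.
rng = np.random.default_rng(0)
a = np.array([1.0, 0.0])
theta = np.array([0.6, 0.8])
r = logistic_reward(a, theta, beta=3.0, rng=rng)
```

Note how the slope parameter $\beta$ sharpens the sigmoid: as $\beta$ grows, small differences in the inner product $a^\top \theta$ translate into near-deterministic rewards, which is what drives the exponential dependence in prior bounds.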
Cite
Text
Dong et al. "On the Performance of Thompson Sampling on Logistic Bandits." Conference on Learning Theory, 2019.
Markdown
[Dong et al. "On the Performance of Thompson Sampling on Logistic Bandits." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/dong2019colt-performance/)
BibTeX
@inproceedings{dong2019colt-performance,
title = {{On the Performance of Thompson Sampling on Logistic Bandits}},
author = {Dong, Shi and Ma, Tengyu and Van Roy, Benjamin},
booktitle = {Conference on Learning Theory},
year = {2019},
pages = {1158-1160},
volume = {99},
url = {https://mlanthology.org/colt/2019/dong2019colt-performance/}
}