Thompson Sampling and Approximate Inference

Abstract

We study the effects of approximate inference on the performance of Thompson sampling in $k$-armed bandit problems. Thompson sampling is a successful algorithm for online decision-making but requires posterior inference, which often must be approximated in practice. We show that even a small constant inference error (in $\alpha$-divergence) can lead to poor performance (linear regret) due to under-exploration (for $\alpha<1$) or over-exploration (for $\alpha>0$) by the approximation. While for $\alpha > 0$ this is unavoidable, for $\alpha \leq 0$ the regret can be improved by adding a small amount of forced exploration, even when the inference error is a large constant.
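For intuition, the following is a minimal sketch (not the paper's exact algorithm or experimental setup) of Thompson sampling on a Bernoulli bandit with Beta posteriors, with a forced-exploration rate `epsilon` mixed in. The Bernoulli/Beta model and the `epsilon` parameter are illustrative assumptions: with exact posteriors and `epsilon = 0` this is standard Thompson sampling, while a small `epsilon > 0` corresponds to the kind of uniform forced exploration the abstract says can restore good regret for $\alpha \leq 0$ under approximate inference.

import numpy as np

def thompson_sampling(true_means, horizon, epsilon=0.0, seed=0):
    """Thompson sampling with optional forced exploration; returns cumulative pseudo-regret."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)   # Beta(1, 1) priors for each arm
    failures = np.ones(k)
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = int(rng.integers(k))                  # forced uniform exploration
        else:
            samples = rng.beta(successes, failures)     # one posterior sample per arm
            arm = int(np.argmax(samples))               # play the arm whose sample is largest
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward
        successes[arm] += reward
        failures[arm] += 1.0 - reward
        regret += best - true_means[arm]
    return regret

# Example: cumulative regret over 10,000 rounds on a 3-armed bandit.
print(thompson_sampling([0.5, 0.6, 0.7], horizon=10_000, epsilon=0.01))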

Cite

Text

Phan et al. "Thompson Sampling and Approximate Inference." Neural Information Processing Systems, 2019.

Markdown

[Phan et al. "Thompson Sampling and Approximate Inference." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/phan2019neurips-thompson/)

BibTeX

@inproceedings{phan2019neurips-thompson,
  title     = {{Thompson Sampling and Approximate Inference}},
  author    = {Phan, My and Abbasi-Yadkori, Yasin and Domke, Justin},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {8804-8813},
  url       = {https://mlanthology.org/neurips/2019/phan2019neurips-thompson/}
}