Diffusion Prior for Online Decision Making: A Case Study of Thompson Sampling
Abstract
In this work, we investigate the possibility of using denoising diffusion models to learn priors for online decision-making problems. Our focus is on the meta-learning-for-bandits framework, with the goal of learning a strategy that performs well across bandit tasks of the same class. To this end, we train a diffusion model that learns the underlying task distribution and combine Thompson sampling with the learned prior to handle new tasks at test time. Our posterior sampling algorithm is designed to carefully balance the learned prior against the noisy observations that come from the learner's interaction with the environment. Preliminary experiments demonstrate the potential of this approach.
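To make the abstract's setup concrete, below is a minimal sketch of Thompson sampling with a generative task prior. It is not the paper's algorithm: the diffusion-specific posterior sampler is replaced by a simple sampling-importance-resampling approximation, and `sample_prior` is a hypothetical stand-in for a trained diffusion model.

```python
import numpy as np

# Sketch: Thompson sampling where posterior draws come from a learned
# generative prior over tasks. The prior here is a hypothetical stand-in;
# the posterior step uses sampling-importance-resampling (SIR), not the
# paper's diffusion-based sampler.

rng = np.random.default_rng(0)
K = 5            # number of arms
SIGMA = 0.5      # known reward-noise standard deviation
N_PARTICLES = 512

def sample_prior(n):
    """Stand-in for the learned task prior: i.i.d. Gaussian mean vectors."""
    return rng.normal(0.0, 1.0, size=(n, K))

def posterior_sample(obs):
    """One approximate posterior draw: sample candidate tasks from the prior,
    reweight by the Gaussian likelihood of the observed (arm, reward) pairs."""
    thetas = sample_prior(N_PARTICLES)
    log_w = np.zeros(N_PARTICLES)
    for arm, r in obs:
        log_w += -0.5 * ((r - thetas[:, arm]) / SIGMA) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return thetas[rng.choice(N_PARTICLES, p=w)]

# Interaction loop on one new task drawn from the same distribution.
true_theta = sample_prior(1)[0]
obs, regret = [], 0.0
for t in range(200):
    theta_t = posterior_sample(obs)    # Thompson sampling step
    arm = int(np.argmax(theta_t))      # act greedily w.r.t. the sampled task
    reward = true_theta[arm] + SIGMA * rng.standard_normal()
    obs.append((arm, reward))
    regret += true_theta.max() - true_theta[arm]
print(f"cumulative regret after 200 rounds: {regret:.2f}")
```

The SIR step is one way to balance the prior against noisy observations, as the abstract describes; the paper's method instead conditions the diffusion sampling process itself.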
Cite
Text
Hsieh et al. "Diffusion Prior for Online Decision Making: A Case Study of Thompson Sampling." NeurIPS 2022 Workshops: SBM, 2022.
Markdown
[Hsieh et al. "Diffusion Prior for Online Decision Making: A Case Study of Thompson Sampling." NeurIPS 2022 Workshops: SBM, 2022.](https://mlanthology.org/neuripsw/2022/hsieh2022neuripsw-diffusion/)
BibTeX
@inproceedings{hsieh2022neuripsw-diffusion,
title = {{Diffusion Prior for Online Decision Making: A Case Study of Thompson Sampling}},
author = {Hsieh, Yu-Guan and Kasiviswanathan, Shiva and Kveton, Branislav and Blöbaum, Patrick},
booktitle = {NeurIPS 2022 Workshops: SBM},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/hsieh2022neuripsw-diffusion/}
}