Online Posterior Sampling with a Diffusion Prior

Abstract

Posterior sampling in contextual bandits with a Gaussian prior can be implemented exactly or approximately using the Laplace approximation. The Gaussian prior is computationally efficient, but it cannot describe complex distributions. In this work, we propose approximate posterior sampling algorithms for contextual bandits with a diffusion model prior. The key idea is to sample from a chain of approximate conditional posteriors, one for each stage of the reverse diffusion process, each obtained by the Laplace approximation. Our approximations are motivated by posterior sampling with a Gaussian prior and inherit its simplicity and efficiency. They are asymptotically consistent and perform well empirically on a variety of contextual bandit problems.
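The abstract's starting point, exact posterior sampling under a Gaussian prior, can be illustrated with a minimal Thompson sampling sketch for a linear contextual bandit. This is a generic illustration of the Gaussian-prior baseline the paper builds on, not the paper's diffusion-prior algorithm; the dimensions, noise model, and conjugate update below are assumptions made for the sketch.

```python
import numpy as np

def gaussian_posterior_update(mu, Sigma, x, r, noise_var=1.0):
    # Conjugate update for a linear-Gaussian reward model r = x^T theta + noise:
    # the posterior over theta stays Gaussian, so sampling from it is exact.
    Sigma_inv = np.linalg.inv(Sigma)
    Sigma_new = np.linalg.inv(Sigma_inv + np.outer(x, x) / noise_var)
    mu_new = Sigma_new @ (Sigma_inv @ mu + x * r / noise_var)
    return mu_new, Sigma_new

# Illustrative Thompson sampling loop: sample theta from the posterior,
# act greedily with respect to the sample, observe a reward, update.
rng = np.random.default_rng(0)
d, K, T = 3, 5, 200                          # assumed dimensions and horizon
theta_star = rng.normal(size=d)              # unknown true parameter
mu, Sigma = np.zeros(d), np.eye(d)           # N(0, I) Gaussian prior
for t in range(T):
    X = rng.normal(size=(K, d))              # context: one feature vector per arm
    theta_tilde = rng.multivariate_normal(mu, Sigma)  # exact posterior sample
    a = int(np.argmax(X @ theta_tilde))      # act greedily on the sample
    r = X[a] @ theta_star + rng.normal()     # noisy reward for the chosen arm
    mu, Sigma = gaussian_posterior_update(mu, Sigma, X[a], r)
```

The paper's contribution replaces the single Gaussian prior above with a diffusion model prior, sampling instead from a chain of Laplace-approximated conditional posteriors, one per stage of the reverse diffusion process.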

Cite

Text

Kveton et al. "Online Posterior Sampling with a Diffusion Prior." Neural Information Processing Systems, 2024. doi:10.52202/079017-4146

Markdown

[Kveton et al. "Online Posterior Sampling with a Diffusion Prior." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/kveton2024neurips-online/) doi:10.52202/079017-4146

BibTeX

@inproceedings{kveton2024neurips-online,
  title     = {{Online Posterior Sampling with a Diffusion Prior}},
  author    = {Kveton, Branislav and Oreshkin, Boris N. and Park, Youngsuk and Deshmukh, Aniket and Song, Rui},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4146},
  url       = {https://mlanthology.org/neurips/2024/kveton2024neurips-online/}
}