Efficient Online Bayesian Inference for Neural Bandits
Abstract
In this paper we present a new algorithm for online (sequential) inference in Bayesian neural networks, and show its suitability for tackling contextual bandit problems. The key idea is to combine the extended Kalman filter (which locally linearizes the likelihood function at each time step) with a (learned or random) low-dimensional affine subspace for the parameters; the use of a subspace enables us to scale our algorithm to models with $\sim 1M$ parameters. While most other neural bandit methods need to store the entire past dataset in order to avoid the problem of “catastrophic forgetting”, our approach uses constant memory. This is possible because we represent uncertainty about all the parameters in the model, not just the final linear layer. We show good results on the “Deep Bayesian Bandit Showdown” benchmark, as well as MNIST and a recommender system.
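The abstract sketches the recipe: keep a Gaussian belief over low-dimensional subspace coordinates $z$, map them to full network weights via $\theta = \theta_* + A z$, and update the belief with an EKF step that linearizes the network output at the current posterior mean. Below is a minimal JAX sketch of that idea, not the authors' code: the tiny MLP, the dimensions, the noise levels `q` and `r`, and the Thompson-sampling arm selection are all illustrative assumptions (the paper allows learned as well as random subspaces, and the abstract does not pin down the exploration strategy). Memory stays constant because only the $d$-dimensional mean and $d \times d$ covariance are stored, with no replay buffer.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# EKF inference over a random affine parameter subspace, theta = theta_star + A @ z,
# paired with Thompson sampling for arm selection.
import jax
import jax.numpy as jnp

D_IN, HID = 10, 32                      # context and hidden dims (assumed)
D = D_IN * HID + HID + HID + 1          # total number of MLP parameters
d = 20                                  # subspace dimension, d << D

def mlp(theta, x):
    """One-hidden-layer MLP with a scalar reward output, from flat weights."""
    i = 0
    W1 = theta[i:i + D_IN * HID].reshape(HID, D_IN); i += D_IN * HID
    b1 = theta[i:i + HID]; i += HID
    w2 = theta[i:i + HID]; i += HID
    b2 = theta[i]
    return w2 @ jnp.tanh(W1 @ x + b1) + b2

def ekf_update(mu, Sigma, x, y, theta_star, A, q=1e-4, r=0.1):
    """One EKF step on the subspace coordinates z (constant memory)."""
    Sigma = Sigma + q * jnp.eye(d)              # predict: small process noise
    f = lambda z: mlp(theta_star + A @ z, x)    # reward model as a function of z
    y_hat, H = jax.value_and_grad(f)(mu)        # local linearization at the mean
    S = H @ Sigma @ H + r                       # innovation variance (scalar)
    K = Sigma @ H / S                           # Kalman gain, shape (d,)
    mu = mu + K * (y - y_hat)
    Sigma = Sigma - jnp.outer(K, H @ Sigma)
    return mu, Sigma

def thompson_arm(key, mu, Sigma, theta_star, A, arm_contexts):
    """Sample z from the posterior and pull the greedy arm under that sample."""
    z = jax.random.multivariate_normal(key, mu, Sigma)
    theta = theta_star + A @ z
    return jnp.argmax(jax.vmap(lambda x: mlp(theta, x))(arm_contexts))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
theta_star = 0.1 * jax.random.normal(k1, (D,))    # anchor point of the subspace
A = jax.random.normal(k2, (D, d)) / jnp.sqrt(d)   # random projection matrix
mu, Sigma = jnp.zeros(d), jnp.eye(d)              # Gaussian belief over z

# One bandit round (illustrative): pick an arm, observe a reward, update.
contexts = jax.random.normal(k3, (5, D_IN))       # contexts for 5 arms
arm = thompson_arm(jax.random.PRNGKey(1), mu, Sigma, theta_star, A, contexts)
y = 1.0                                           # observed reward (stub)
mu, Sigma = ekf_update(mu, Sigma, contexts[arm], y, theta_star, A)
```

Because the Jacobian is taken with respect to $z$ rather than $\theta$, the update touches only $d$-dimensional quantities; this is what lets the approach scale to networks with $\sim 1M$ parameters while representing uncertainty over all of them.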
Cite
Text
Duran-Martin et al. "Efficient Online Bayesian Inference for Neural Bandits." Artificial Intelligence and Statistics, 2022.
Markdown
[Duran-Martin et al. "Efficient Online Bayesian Inference for Neural Bandits." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/duranmartin2022aistats-efficient/)
BibTeX
@inproceedings{duranmartin2022aistats-efficient,
title = {{Efficient Online Bayesian Inference for Neural Bandits}},
author = {Duran-Martin, Gerardo and Kara, Aleyna and Murphy, Kevin},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {6002--6021},
volume = {151},
url = {https://mlanthology.org/aistats/2022/duranmartin2022aistats-efficient/}
}