Online Learning in Contextual Bandits Using Gated Linear Networks

Abstract

We introduce a new and completely online contextual bandit algorithm called Gated Linear Contextual Bandits (GLCB). This algorithm is based on Gated Linear Networks (GLNs), a recently introduced deep learning architecture with properties well-suited to the online setting. Leveraging the data-dependent gating properties of the GLN, we are able to estimate prediction uncertainty with effectively zero algorithmic overhead. We empirically evaluate GLCB against 9 state-of-the-art algorithms that leverage deep neural networks, on a standard benchmark suite of discrete and continuous contextual bandit problems. GLCB obtains mean first place despite being the only online method, and we further support these results with a theoretical study of its convergence properties.
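The "data-dependent gating" the abstract refers to is the GLN half-space gating mechanism: each neuron holds a bank of weight vectors, and a fixed set of random hyperplanes maps the side information (the context) to an index selecting which weight vector is active; the neuron then geometrically mixes its input probabilities. The sketch below illustrates a single such gated neuron. It is a minimal illustration under assumed hyperparameters (class and variable names are ours, not the authors' code), not the GLCB implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    # Clip to avoid infinities at the boundaries.
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

class HalfSpaceGatedNeuron:
    """One GLN-style neuron: random hyperplanes map the side info z to a
    context index, which selects the active weight vector; the neuron then
    geometrically mixes the input probabilities (a sigmoid of a weighted
    sum of logits)."""

    def __init__(self, n_inputs, n_hyperplanes=4, side_dim=2):
        self.hyperplanes = rng.normal(size=(n_hyperplanes, side_dim))
        self.biases = rng.normal(size=n_hyperplanes)
        # One weight vector per binary gating pattern (2^n_hyperplanes of them).
        self.weights = np.full((2 ** n_hyperplanes, n_inputs), 1.0 / n_inputs)

    def context(self, z):
        # Which side of each hyperplane z falls on determines the context index.
        bits = (self.hyperplanes @ z > self.biases).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def predict(self, p_in, z):
        w = self.weights[self.context(z)]
        return sigmoid(w @ logit(p_in))

    def update(self, p_in, z, target, lr=0.1):
        # Online gradient step on the log loss; only the weights of the
        # active context move, which is what makes learning fully online.
        c = self.context(z)
        pred = sigmoid(self.weights[c] @ logit(p_in))
        self.weights[c] -= lr * (pred - target) * logit(p_in)
        return pred
```

Because only the weight vector selected by the current context is updated, each neuron solves a simple convex (logistic-regression-like) problem per context, which is the property the paper exploits for cheap uncertainty estimates.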

Cite

Text

Sezener et al. "Online Learning in Contextual Bandits Using Gated Linear Networks." Neural Information Processing Systems, 2020.

Markdown

[Sezener et al. "Online Learning in Contextual Bandits Using Gated Linear Networks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/sezener2020neurips-online/)

BibTeX

@inproceedings{sezener2020neurips-online,
  title     = {{Online Learning in Contextual Bandits Using Gated Linear Networks}},
  author    = {Sezener, Eren and Hutter, Marcus and Budden, David and Wang, Jianan and Veness, Joel},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/sezener2020neurips-online/}
}