Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning
Abstract
Bayesian Reinforcement Learning (BRL) offers a decision-theoretic solution to the reinforcement learning problem. While ``model-based'' BRL algorithms have focused on maintaining a posterior distribution over models, ``model-free'' BRL methods try to estimate value function distributions, but make strong implicit assumptions or approximations. We describe a novel Bayesian framework, \emph{inferential induction}, for correctly inferring value function distributions from data, which leads to a new family of BRL algorithms. We design an algorithm, Bayesian Backwards Induction (BBI), within this framework. We experimentally demonstrate that BBI is competitive with the state of the art. However, its advantage relative to existing BRL model-free methods is not as great as we had expected, particularly when the additional computational burden is taken into account.
Cite
Text
Jorge et al. "Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning." NeurIPS 2020 Workshops: ICBINB, 2020.
Markdown
[Jorge et al. "Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning." NeurIPS 2020 Workshops: ICBINB, 2020.](https://mlanthology.org/neuripsw/2020/jorge2020neuripsw-inferential/)
BibTeX
@inproceedings{jorge2020neuripsw-inferential,
  title = {{Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning}},
  author = {Jorge, Emilio and Eriksson, Hannes and Dimitrakakis, Christos and Basu, Debabrota and Grover, Divya},
  booktitle = {NeurIPS 2020 Workshops: ICBINB},
  year = {2020},
  url = {https://mlanthology.org/neuripsw/2020/jorge2020neuripsw-inferential/}
}