Principled Exploration via Optimistic Bootstrapping and Backward Induction

Abstract

One principled approach to provably efficient exploration is to incorporate an upper confidence bound (UCB) into the value function as a bonus. However, UCB is designed for tabular and linear settings and is not directly compatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and LSVI-UCB in the linear setting. We propagate future uncertainty in a time-consistent manner through an episodic backward update, which exploits this theoretical advantage and empirically improves sample efficiency. Our experiments on an MNIST maze and the Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.
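The abstract gives no pseudocode, so the sketch below is only a rough illustration of the two ideas it describes: a bootstrap-based UCB-bonus for optimistic action selection, and an episodic backward sweep that propagates uncertainty from later steps to earlier ones. It is a minimal PyTorch sketch under assumed choices (a shared-torso ensemble of Q-heads, ensemble standard deviation as the bonus, a Bernoulli bootstrap mask, per-transition updates without a target network or replay buffer); it is not the paper's exact formulation.

import torch
import torch.nn as nn


class BootstrappedQNet(nn.Module):
    """Ensemble of K Q-heads on a shared torso (hypothetical architecture)."""

    def __init__(self, obs_dim, n_actions, n_heads=10, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, n_actions) for _ in range(n_heads)])

    def forward(self, obs):
        h = self.torso(obs)
        # Shape: (n_heads, batch, n_actions)
        return torch.stack([head(h) for head in self.heads])


def ucb_bonus(q_ensemble, beta=1.0):
    """Epistemic-uncertainty bonus: disagreement (std) across bootstrapped heads."""
    return beta * q_ensemble.std(dim=0)


def act_optimistically(net, obs, beta=1.0):
    """Greedy action w.r.t. the optimistic value: ensemble mean plus UCB-bonus."""
    with torch.no_grad():
        q = net(obs.unsqueeze(0))  # (K, 1, A)
        return (q.mean(0) + ucb_bonus(q, beta)).argmax(-1).item()


def episodic_backward_update(net, optimizer, episode, gamma=0.99, beta=1.0):
    """Sweep one episode's transitions from the last step to the first, so the
    optimistic target at step t already reflects the just-updated step t+1.
    `episode` is a list of (obs, action, reward, next_obs, done) tuples."""
    for obs, action, reward, next_obs, done in reversed(episode):
        with torch.no_grad():
            q_next = net(next_obs.unsqueeze(0))  # (K, 1, A)
            optimistic_next = (q_next.mean(0) + ucb_bonus(q_next, beta)).max(-1).values
            target = reward + gamma * (1.0 - done) * optimistic_next  # shape (1,)

        q = net(obs.unsqueeze(0))[:, 0, action]  # (K,) predictions for the taken action
        # Bernoulli bootstrap mask so each head effectively sees a different subsample.
        mask = torch.bernoulli(torch.full((q.shape[0],), 0.8))
        loss = (mask * (q - target) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In practice the sweep would be driven by data collected with act_optimistically and trained with an optimizer such as torch.optim.Adam(net.parameters()); the reverse iteration order is what makes the propagated uncertainty time-consistent within an episode.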

Cite

Text

Bai et al. "Principled Exploration via Optimistic Bootstrapping and Backward Induction." International Conference on Machine Learning, 2021.

Markdown

[Bai et al. "Principled Exploration via Optimistic Bootstrapping and Backward Induction." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/bai2021icml-principled/)

BibTeX

@inproceedings{bai2021icml-principled,
  title     = {{Principled Exploration via Optimistic Bootstrapping and Backward Induction}},
  author    = {Bai, Chenjia and Wang, Lingxiao and Han, Lei and Hao, Jianye and Garg, Animesh and Liu, Peng and Wang, Zhaoran},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {577-587},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/bai2021icml-principled/}
}