Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees
Abstract
We consider reinforcement learning in an environment modeled by an episodic, tabular, step-dependent Markov decision process of horizon $H$ with $S$ states and $A$ actions. The performance of an agent is measured by the regret after interacting with the environment for $T$ episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in $H$, $S$, $A$, and $T$ per state-action pair. For OPSRL we guarantee a high-probability regret bound of order at most $O(\sqrt{H^3SAT})$, ignoring $\text{poly}\log(HSAT)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms of a Dirichlet random vector, which may be of independent interest. Specifically, we extend the normal-approximation-based lower bound for Beta distributions by Alfers and Dinges (1984) to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3SAT})$, thereby answering the open problems raised by Agrawal and Jia (2017) for the episodic setting.
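The core mechanism summarized above, maintaining a Dirichlet posterior over transition probabilities and acting on the most optimistic of a handful of posterior samples, can be sketched in a few lines. The following is a minimal illustration in an assumed tabular setting, not the paper's implementation; the function name `optimistic_q_estimate` and its arguments are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimistic_q_estimate(counts, next_values, reward, n_samples):
    """Optimistic one-step value estimate for a single (state, action) pair.

    counts      -- Dirichlet posterior parameters (visit counts of next states)
    next_values -- current value estimates V(s') over next states
    reward      -- immediate reward r(s, a)
    n_samples   -- number of posterior samples; per the abstract, a number
                   logarithmic in H, S, A, and T suffices
    """
    # Draw a few transition vectors from the Dirichlet posterior and keep the
    # most optimistic linear form p . V; the paper's anti-concentration
    # inequality for such linear forms is what makes few samples enough.
    samples = rng.dirichlet(counts, size=n_samples)  # shape (n_samples, S)
    return reward + np.max(samples @ next_values)

# Toy usage: 4 next states after a handful of observed transitions.
counts = np.array([3.0, 1.0, 1.0, 2.0])
next_values = np.array([0.0, 1.0, 0.5, 0.2])
print(optimistic_q_estimate(counts, next_values, reward=0.1, n_samples=8))
```

Intuitively, the sample count can stay small because of the anti-concentration result: a single Dirichlet draw of the linear form $p^\top V$ exceeds the target with probability bounded away from zero, so logarithmically many draws make the maximum optimistic with high probability.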
Cite
Text
Tiapkin et al. "Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees." Neural Information Processing Systems, 2022.
Markdown
[Tiapkin et al. "Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/tiapkin2022neurips-optimistic/)
BibTeX
@inproceedings{tiapkin2022neurips-optimistic,
title = {{Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees}},
author = {Tiapkin, Daniil and Belomestny, Denis and Calandriello, Daniele and Moulines, Eric and Munos, Remi and Naumov, Alexey and Rowland, Mark and Valko, Michal and Ménard, Pierre},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/tiapkin2022neurips-optimistic/}
}