Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale
Abstract
In this paper, we address the following problem: given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance in MDPs? We first propose an Informed Posterior Sampling-based RL (iPSRL) algorithm that uses the offline dataset, and information about the expert's behavioral policy used to generate the offline dataset. Its cumulative Bayesian regret goes down to zero exponentially fast in $N$, the offline dataset size, if the expert is competent enough. Since this algorithm is computationally impractical, we then propose the iRLSVI algorithm, which can be seen as a combination of the RLSVI algorithm for online RL and imitation learning. Our empirical results show that the proposed iRLSVI algorithm achieves a significant reduction in regret compared to two baselines: no offline data, and using the offline dataset without suitably modeling the generative policy. Our algorithm can be seen as bridging online RL and imitation learning.
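To make the "combination of online RL and imitation learning" idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' actual iRLSVI algorithm): a tabular agent that samples RLSVI-style perturbed value estimates for exploration, while biasing action selection toward an expert policy estimated from offline demonstration counts. All names, shapes, and the weighting scheme below are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration only: combine randomized (RLSVI-style) value
# estimates with an imitation term from offline expert demonstrations.
# The weighting lambda and smoothing scheme are assumptions, not the paper's method.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# Offline demonstrations: (state, action) pairs from an imperfect expert.
offline_data = [(0, 1), (0, 1), (1, 2), (2, 0), (2, 0), (3, 1)]

# Estimate the expert's behavioral policy from counts (Laplace smoothing).
counts = np.ones((n_states, n_actions))  # prior pseudo-counts
for s, a in offline_data:
    counts[s, a] += 1
expert_policy = counts / counts.sum(axis=1, keepdims=True)

def perturbed_q(q_mean, noise_scale=0.1):
    """RLSVI-style exploration: sample a randomized value function."""
    return q_mean + noise_scale * rng.standard_normal(q_mean.shape)

def select_action(state, q_mean, imitation_weight=1.0):
    """Maximize a perturbed value estimate plus a log-likelihood imitation bonus."""
    q_sample = perturbed_q(q_mean)
    imitation_bonus = imitation_weight * np.log(expert_policy[state])
    return int(np.argmax(q_sample[state] + imitation_bonus))

# Example: act w.r.t. a (dummy) value estimate plus the imitation term.
q_mean = np.zeros((n_states, n_actions))
print(select_action(0, q_mean))  # tends to pick action 1, the expert's choice in state 0
```

As the offline dataset grows, the imitation term dominates in states the expert visited often, which loosely mirrors the abstract's claim that regret shrinks with the dataset size $N$ when the expert is competent.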
Cite
Text
Hao et al. "Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale." Transactions on Machine Learning Research, 2023.
Markdown
[Hao et al. "Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/hao2023tmlr-bridging/)
BibTeX
@article{hao2023tmlr-bridging,
title = {{Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale}},
author = {Hao, Botao and Jain, Rahul and Tang, Dengwang and Wen, Zheng},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/hao2023tmlr-bridging/}
}