Policy Finetuning in Reinforcement Learning via Design of Experiments Using Offline Data

Abstract

In some applications of reinforcement learning, a dataset of pre-collected experience is already available but it is also possible to acquire some additional online data to help improve the quality of the policy. However, it may be preferable to gather additional data with a single, non-reactive exploration policy and avoid the engineering costs associated with switching policies. In this paper we propose an algorithm with provable guarantees that can leverage an offline dataset to design a single non-reactive policy for exploration. We theoretically analyze the algorithm and measure the quality of the final policy as a function of the local coverage of the original dataset and the amount of additional data collected.
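The abstract does not include pseudocode, so the sketch below is only a loose illustration of the general setting, not the paper's algorithm: it builds a toy tabular MDP, measures how well an offline dataset covers each state-action pair, designs a single fixed (non-reactive) exploration policy from those counts, and fits a final policy on the combined offline and online data. Everything concrete here (the toy MDP, the inverse-square-root count weighting, the episode counts) is an assumption made for the example.

# Hypothetical sketch (not the paper's algorithm): design a single
# non-reactive exploration policy from offline coverage counts in a
# small tabular MDP, then fit a final policy on the combined data.
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 5, 3, 10  # states, actions, horizon (assumed for illustration)

# Toy MDP dynamics and rewards (assumed, for illustration only).
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a distribution over next states
R = rng.uniform(size=(S, A))

def rollout(policy, n_episodes):
    """Collect (s, a, r, s') transitions by following a fixed, non-reactive policy."""
    data = []
    for _ in range(n_episodes):
        s = 0
        for _ in range(H):
            a = rng.choice(A, p=policy[s])
            s_next = rng.choice(S, p=P[s, a])
            data.append((s, a, R[s, a], s_next))
            s = s_next
    return data

# Offline dataset collected by some behavior policy (assumed uniform here).
behavior = np.full((S, A), 1.0 / A)
offline = rollout(behavior, n_episodes=50)

# Count how often the offline data covers each (state, action) pair.
counts = np.zeros((S, A))
for s, a, _, _ in offline:
    counts[s, a] += 1

# One simple (illustrative) design choice: a single non-reactive exploration
# policy that up-weights under-covered actions via inverse-sqrt counts.
weights = 1.0 / np.sqrt(1.0 + counts)
explore = weights / weights.sum(axis=1, keepdims=True)

# Gather the additional online data with the fixed exploration policy,
# then estimate a model on the combined dataset and extract a greedy policy.
online = rollout(explore, n_episodes=50)
combined = offline + online

N = np.zeros((S, A, S))
R_hat = np.zeros((S, A))
for s, a, r, s_next in combined:
    N[s, a, s_next] += 1
    R_hat[s, a] += r
visits = N.sum(axis=2)
P_hat = N / np.maximum(visits, 1)[..., None]
R_hat = R_hat / np.maximum(visits, 1)

V = np.zeros(S)
for _ in range(H):  # finite-horizon value iteration on the estimated model
    Q = R_hat + P_hat @ V
    V = Q.max(axis=1)
final_policy = Q.argmax(axis=1)  # greedy policy from the combined data
print("greedy action per state:", final_policy)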

Cite

Text

Zhang and Zanette. "Policy Finetuning in Reinforcement Learning via Design of Experiments Using Offline Data." Neural Information Processing Systems, 2023.

Markdown

[Zhang and Zanette. "Policy Finetuning in Reinforcement Learning via Design of Experiments Using Offline Data." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/zhang2023neurips-policy/)

BibTeX

@inproceedings{zhang2023neurips-policy,
  title     = {{Policy Finetuning in Reinforcement Learning via Design of Experiments Using Offline Data}},
  author    = {Zhang, Ruiqi and Zanette, Andrea},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/zhang2023neurips-policy/}
}