Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation
Abstract
We study the problem of deployment-efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ and planning horizon $H$, we propose a new algorithm that collects at most $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ trajectories within $H$ deployments to identify an $\epsilon$-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal $d$ dependence in sample complexity at the same time, even if the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, which could be of independent interest.
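The abstract refers to a generalized G-optimal experiment design. As background only (this is not the paper's algorithm), the sketch below shows the classical notion it generalizes: a minimal Python routine that computes a G-optimal design over a finite set of feature vectors with Frank-Wolfe-style updates, using the Kiefer-Wolfowitz equivalence between G- and D-optimality. The function name, step-size schedule, and iteration count are illustrative assumptions.

```python
import numpy as np

def g_optimal_design(X, n_iters=1000):
    """Approximate a classical G-optimal design over the rows of X (n x d).

    Returns a probability vector pi over the rows such that the worst-case
    leverage score max_i x_i^T M(pi)^{-1} x_i is (approximately) minimized,
    where M(pi) = sum_i pi_i x_i x_i^T. Illustrative sketch only.
    """
    n, d = X.shape
    pi = np.full(n, 1.0 / n)                 # start from the uniform design
    for t in range(n_iters):
        M = X.T @ (pi[:, None] * X)          # design matrix sum_i pi_i x_i x_i^T
        M_inv = np.linalg.pinv(M)
        # leverage scores ||x_i||^2_{M^{-1}}; G-optimality minimizes their maximum
        scores = np.einsum('ij,jk,ik->i', X, M_inv, X)
        i_star = np.argmax(scores)
        gamma = 1.0 / (t + 2)                # generic Frank-Wolfe step size
        pi = (1.0 - gamma) * pi
        pi[i_star] += gamma
    return pi

# Example usage on random candidate features (dimensions are arbitrary):
# X = np.random.randn(50, 4); pi = g_optimal_design(X)
```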
Cite
Text
Qiao and Wang. "Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation." NeurIPS 2022 Workshops: Offline_RL, 2022.
Markdown
[Qiao and Wang. "Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation." NeurIPS 2022 Workshops: Offline_RL, 2022.](https://mlanthology.org/neuripsw/2022/qiao2022neuripsw-nearoptimal/)
BibTeX
@inproceedings{qiao2022neuripsw-nearoptimal,
title = {{Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation}},
author = {Qiao, Dan and Wang, Yu-Xiang},
booktitle = {NeurIPS 2022 Workshops: Offline_RL},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/qiao2022neuripsw-nearoptimal/}
}