Model-Based Reinforcement Learning with Value-Targeted Regression

Abstract

Reinforcement learning (RL) applies to control problems with large state and action spaces, so it is natural to consider RL with a parametric model. In this paper we focus on finite-horizon episodic RL where the transition model admits the linear parametrization $P_{\theta} = \sum_{i=1}^{d} \theta_{i} P_{i}$. This parametrization provides universal function approximation and captures several useful models and applications. We propose an upper-confidence model-based RL algorithm with value-targeted model parameter estimation. The algorithm updates the estimate of $\theta$ by recursively solving a regression problem whose target is the latest value estimate. We demonstrate the efficiency of our algorithm by proving an expected regret bound of $\tilde{\mathcal{O}}(d\sqrt{H^{3}T})$, where $H$, $T$, and $d$ are the horizon, the total number of steps, and the dimension of $\theta$, respectively. This regret bound is independent of the total number of states or actions, and is close to the lower bound $\Omega(\sqrt{HdT})$.
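To make the estimation step concrete, here is a minimal NumPy sketch of value-targeted regression on a toy tabular problem. All sizes, the basis kernels, and the value function are hypothetical stand-ins: the key point is that under the linear model $\mathbb{E}[V(s') \mid s,a] = \sum_i \theta_i \langle P_i(\cdot \mid s,a), V\rangle$, so $\theta$ can be fit by (ridge-regularized) least squares with features $\langle P_i(\cdot \mid s,a), V\rangle$ and the observed value $V(s')$ as the regression target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small problem: S states, A actions, d basis transition models P_i.
S, A, d = 5, 3, 4

# Random basis kernels P_i(s'|s,a); the true model is a convex combination of them.
P_basis = rng.random((d, S, A, S))
P_basis /= P_basis.sum(axis=-1, keepdims=True)
theta_true = rng.dirichlet(np.ones(d))
P_true = np.tensordot(theta_true, P_basis, axes=1)  # shape (S, A, S)

# Stand-in for the current value estimate (in the algorithm this is the
# latest optimistic value function, recomputed each episode).
V = rng.random(S)

# Collect transitions and build the value-targeted regression problem:
# feature x_t = (<P_i(.|s_t,a_t), V>)_{i=1..d},  target y_t = V(s'_t).
X, y = [], []
for _ in range(2000):
    s, a = rng.integers(S), rng.integers(A)
    s_next = rng.choice(S, p=P_true[s, a])
    X.append(P_basis[:, s, a, :] @ V)  # d-vector of <P_i, V> inner products
    y.append(V[s_next])                # value estimate at the observed next state
X, y = np.array(X), np.array(y)

# Ridge-regularized least-squares estimate of theta.
lam = 1.0
theta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

In the full algorithm this estimate is maintained recursively as new transitions arrive, and the confidence region around $\theta$ drives the optimistic planning step; the sketch above shows only the batch regression at its core.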

Cite

Text

Jia et al. "Model-Based Reinforcement Learning with Value-Targeted Regression." Proceedings of the 2nd Conference on Learning for Dynamics and Control, 2020.

Markdown

[Jia et al. "Model-Based Reinforcement Learning with Value-Targeted Regression." Proceedings of the 2nd Conference on Learning for Dynamics and Control, 2020.](https://mlanthology.org/l4dc/2020/jia2020l4dc-modelbased/)

BibTeX

@inproceedings{jia2020l4dc-modelbased,
  title     = {{Model-Based Reinforcement Learning with Value-Targeted Regression}},
  author    = {Jia, Zeyu and Yang, Lin and Szepesvari, Csaba and Wang, Mengdi},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  year      = {2020},
  pages     = {666-686},
  volume    = {120},
  url       = {https://mlanthology.org/l4dc/2020/jia2020l4dc-modelbased/}
}