Integrating a Partial Model into Model Free Reinforcement Learning

Abstract

In reinforcement learning, an agent uses online feedback from the environment to adaptively select an effective policy. Model-free approaches address this task by directly mapping environmental states to actions, while model-based methods attempt to construct a model of the environment and then select optimal actions based on that model. Given the complementary advantages of both approaches, we suggest a novel procedure that augments a model-free algorithm with a partial model. The resulting hybrid algorithm switches between a model-based and a model-free mode, depending on the current state and the agent's knowledge. Our method relies on a novel definition of a partially known model, and on an estimator that incorporates such knowledge to reduce the uncertainty in stochastic approximation iterations. We prove that this approach leads to improved policy evaluation whenever environmental knowledge is available, without compromising performance when such knowledge is absent. Numerical simulations demonstrate the effectiveness of the approach on policy gradient and Q-learning algorithms, and its usefulness in solving a call admission control problem.
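To make the switching idea concrete, the following is a minimal illustrative sketch, not the authors' algorithm: a tabular Q-learning agent that performs a model-based expected backup for state-action pairs covered by a partial model and falls back to the ordinary model-free sample backup elsewhere. All names here (`HybridQAgent`, `known_model`, and the dictionary layout of the partial model) are hypothetical choices made for illustration.

```python
# Illustrative sketch only: hybrid tabular Q-learning with a partial model.
import numpy as np

class HybridQAgent:
    def __init__(self, n_states, n_actions, known_model=None,
                 alpha=0.1, gamma=0.95):
        # known_model (hypothetical format): dict mapping (s, a) ->
        # (p_next, rewards), where p_next is a length-n_states probability
        # vector over successor states and rewards holds the corresponding
        # expected immediate rewards.
        self.Q = np.zeros((n_states, n_actions))
        self.known_model = known_model or {}
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        if (s, a) in self.known_model:
            # Model-based mode: replace the noisy sampled target with its
            # expectation under the partially known dynamics.
            p_next, rewards = self.known_model[(s, a)]
            target = np.dot(p_next, rewards + self.gamma * self.Q.max(axis=1))
        else:
            # Model-free mode: standard Q-learning sample backup.
            target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
```

In this sketch, using the expected backup wherever the dynamics are known removes the sampling noise from those updates, which is the intuition behind the abstract's claim of reduced uncertainty in the stochastic approximation iterations; for all other state-action pairs the agent behaves exactly like a standard model-free learner.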

Cite

Text

Tamar et al. "Integrating a Partial Model into Model Free Reinforcement Learning." Journal of Machine Learning Research, 2012.

Markdown

[Tamar et al. "Integrating a Partial Model into Model Free Reinforcement Learning." Journal of Machine Learning Research, 2012.](https://mlanthology.org/jmlr/2012/tamar2012jmlr-integrating/)

BibTeX

@article{tamar2012jmlr-integrating,
  title     = {{Integrating a Partial Model into Model Free Reinforcement Learning}},
  author    = {Tamar, Aviv and Di Castro, Dotan and Meir, Ron},
  journal   = {Journal of Machine Learning Research},
  year      = {2012},
  pages     = {1927--1966},
  volume    = {13},
  url       = {https://mlanthology.org/jmlr/2012/tamar2012jmlr-integrating/}
}