When to Trust Your Model: Model-Based Policy Optimization

Abstract

Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a guarantee of monotonic improvement at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such analysis to justify model usage. Motivated by this analysis, we then demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without the usual pitfalls. In particular, this approach surpasses the sample efficiency of prior model-based methods, matches the asymptotic performance of the best model-free algorithms, and scales to horizons that cause other model-based methods to fail entirely.
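
The abstract outlines the core procedure: short rollouts produced by a learned dynamics model, branched from states stored in the real replay buffer, are used to train a model-free policy. The sketch below is an illustration of that idea only, not the authors' implementation; `model`, `policy`, `env`, and the two buffers are assumed, duck-typed interfaces, and the rollout length, number of branch points, and update ratios are placeholder values.

def branched_rollouts(model, policy, env_buffer, model_buffer,
                      num_starts=400, k=1):
    """Generate k-step model rollouts branched from real states (assumed numpy-array interfaces)."""
    states = env_buffer.sample_states(num_starts)       # branch points come from real data
    for _ in range(k):                                   # keep rollouts short to limit model bias
        actions = policy.act(states)
        next_states, rewards, dones = model.step(states, actions)
        model_buffer.add(states, actions, rewards, next_states, dones)
        states = next_states[~dones]                     # continue only non-terminal branches
        if len(states) == 0:
            break

def training_iteration(env, model, policy, env_buffer, model_buffer,
                       env_steps=1000, policy_updates=20):
    """One outer loop: collect real data, refit the model, train the policy on model data."""
    state = env.reset()
    for _ in range(env_steps):                           # real-environment interaction
        action = policy.act(state)
        next_state, reward, done, _ = env.step(action)
        env_buffer.add(state, action, reward, next_state, done)
        state = env.reset() if done else next_state

    model.fit(env_buffer)                                # refit dynamics model on all real data
    branched_rollouts(model, policy, env_buffer, model_buffer)

    for _ in range(policy_updates):                      # e.g. off-policy (SAC-style) updates
        policy.update(model_buffer.sample_batch())

Keeping the rollout length small is the point of the branching scheme: model error compounds with horizon, so short branches from real states retain the data-generation benefit while limiting the bias that long model rollouts would accumulate.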

Cite

Text

Janner et al. "When to Trust Your Model: Model-Based Policy Optimization." Neural Information Processing Systems, 2019.

Markdown

[Janner et al. "When to Trust Your Model: Model-Based Policy Optimization." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/janner2019neurips-trust/)

BibTeX

@inproceedings{janner2019neurips-trust,
  title     = {{When to Trust Your Model: Model-Based Policy Optimization}},
  author    = {Janner, Michael and Fu, Justin and Zhang, Marvin and Levine, Sergey},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {12519--12530},
  url       = {https://mlanthology.org/neurips/2019/janner2019neurips-trust/}
}