On Effective Scheduling of Model-Based Reinforcement Learning

Abstract

Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency. Despite its impressive success so far, it remains unclear how to appropriately schedule important hyperparameters, such as the real data ratio for policy optimization in Dyna-style model-based algorithms, to achieve adequate performance. In this paper, we first theoretically analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance. Inspired by this analysis, we propose a framework named AutoMBPO to automatically schedule the real data ratio, as well as other hyperparameters, when training the model-based policy optimization (MBPO) algorithm, a representative running case of model-based methods. On several continuous control tasks, the MBPO instance trained with hyperparameters scheduled by AutoMBPO significantly surpasses the original one, and the real data ratio schedule found by AutoMBPO is consistent with our theoretical analysis.
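To make the scheduling idea concrete, below is a minimal sketch of how a gradually increasing real data ratio could be applied when sampling policy-optimization batches in a Dyna-style loop. The linear schedule, the start/end values, and the function names here are illustrative assumptions, not the paper's method: AutoMBPO itself adjusts the schedule online with a learned hyperparameter controller rather than fixing it in advance.

```python
import random

def real_data_ratio(step: int, total_steps: int,
                    start: float = 0.1, end: float = 1.0) -> float:
    # Hypothetical linear schedule: the fraction of real transitions in
    # each batch grows from `start` to `end` over training. AutoMBPO
    # instead tunes this ratio online with a trained controller.
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)

def sample_mixed_batch(real_buffer, model_buffer, batch_size, ratio):
    # Mix real environment transitions with model-generated rollouts
    # according to the current real data ratio.
    n_real = min(int(batch_size * ratio), len(real_buffer))
    batch = random.sample(real_buffer, n_real)
    batch += random.sample(model_buffer, batch_size - n_real)
    return batch

# Usage: early batches are dominated by model-generated data; late in
# training, mostly real transitions are used for policy optimization.
real_buffer = [("s", "a", "r", "s'")] * 1000      # placeholder transitions
model_buffer = [("s", "a", "r_hat", "s'")] * 1000
early = sample_mixed_batch(real_buffer, model_buffer, 256,
                           real_data_ratio(step=1_000, total_steps=100_000))
late = sample_mixed_batch(real_buffer, model_buffer, 256,
                          real_data_ratio(step=90_000, total_steps=100_000))
```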

Cite

Text

Lai et al. "On Effective Scheduling of Model-Based Reinforcement Learning." Neural Information Processing Systems, 2021.

Markdown

[Lai et al. "On Effective Scheduling of Model-Based Reinforcement Learning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/lai2021neurips-effective/)

BibTeX

@inproceedings{lai2021neurips-effective,
  title     = {{On Effective Scheduling of Model-Based Reinforcement Learning}},
  author    = {Lai, Hang and Shen, Jian and Zhang, Weinan and Huang, Yimin and Zhang, Xing and Tang, Ruiming and Yu, Yong and Li, Zhenguo},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/lai2021neurips-effective/}
}