Evaluating Model-Based Planning and Planner Amortization for Continuous Control

Abstract

There is a widespread intuition that model-based control methods should be able to surpass the data efficiency of model-free approaches. In this paper we attempt to evaluate this intuition on various challenging locomotion tasks. We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning; the learned policy serves as a proposal for MPC. We show that MPC with learned proposals and models (trained on the fly or transferred from related tasks) can significantly improve performance and data efficiency with respect to model-free methods. However, we find that well-tuned model-free agents are strong baselines even for high DoF control problems. Finally, we show that it is possible to distil a model-based planner into a policy that amortizes the planning computation without any loss of performance.
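The hybrid approach the abstract describes, model predictive control that samples around a learned policy proposal and scores rollouts with a learned model, can be sketched roughly as follows. This is an illustrative toy, not the paper's method: `proposal_policy` and `model_step` stand in for learned networks, and the sampling scheme is simple random shooting around the proposal.

```python
import numpy as np

def proposal_policy(state):
    # Placeholder for a learned proposal policy (here a fixed linear rule).
    return -0.5 * state

def model_step(state, action):
    # Placeholder for a learned dynamics/reward model (toy linear system,
    # quadratic cost on the next state).
    next_state = state + action
    reward = -float(np.sum(next_state ** 2))
    return next_state, reward

def mpc_with_proposal(state, horizon=5, num_samples=64, noise_std=0.1, seed=0):
    """Sample action sequences as noisy perturbations of the proposal policy,
    roll each out through the learned model, and return the first action of
    the highest-return sequence (the standard MPC re-planning pattern)."""
    rng = np.random.default_rng(seed)
    best_return, best_first_action = -np.inf, None
    for _ in range(num_samples):
        s, total, first_action = state.copy(), 0.0, None
        for t in range(horizon):
            a = proposal_policy(s) + noise_std * rng.standard_normal(s.shape)
            if t == 0:
                first_action = a
            s, r = model_step(s, a)
            total += r
        if total > best_return:
            best_return, best_first_action = total, first_action
    return best_first_action

action = mpc_with_proposal(np.ones(3))
```

Amortization, in this sketch, would mean training a feed-forward policy to imitate `mpc_with_proposal`'s outputs so the sampling loop is no longer needed at execution time.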

Cite

Text

Byravan et al. "Evaluating Model-Based Planning and Planner Amortization for Continuous Control." International Conference on Learning Representations, 2022.

Markdown

[Byravan et al. "Evaluating Model-Based Planning and Planner Amortization for Continuous Control." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/byravan2022iclr-evaluating/)

BibTeX

@inproceedings{byravan2022iclr-evaluating,
  title     = {{Evaluating Model-Based Planning and Planner Amortization for Continuous Control}},
  author    = {Byravan, Arunkumar and Hasenclever, Leonard and Trochim, Piotr and Mirza, Mehdi and Ialongo, Alessandro Davide and Tassa, Yuval and Springenberg, Jost Tobias and Abdolmaleki, Abbas and Heess, Nicolas and Merel, Josh and Riedmiller, Martin},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/byravan2022iclr-evaluating/}
}