Differentiable MPC for End-to-End Planning and Control
Abstract
We present foundations for using Model Predictive Control (MPC) as a differentiable policy class for reinforcement learning in continuous state and action spaces. This provides one way of leveraging and combining the advantages of model-free and model-based approaches. Specifically, we differentiate through MPC by using the KKT conditions of the convex approximation at a fixed point of the controller. Using this strategy, we are able to learn the cost and dynamics of a controller via end-to-end learning. Our experiments focus on imitation learning in the pendulum and cartpole domains, where we learn the cost and dynamics terms of an MPC policy class. We show that our MPC policies are significantly more data-efficient than a generic neural network and that our method is superior to traditional system identification in a setting where the expert is unrealizable.
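To make the differentiation strategy concrete: at a fixed point of the controller, the convex approximation is a quadratic program, and its solution is fully characterized by its KKT conditions, so backpropagating through a differentiable solve of the KKT system performs implicit differentiation of the controller's output with respect to the cost (and, analogously, dynamics) parameters. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration on an equality-constrained QP standing in for the MPC subproblem, and all names in it (`solve_eq_qp`, `x_expert`, and so on) are hypothetical.

```python
import torch


def solve_eq_qp(Q, q, A, b):
    """Solve min_x 0.5 x'Qx + q'x  s.t.  Ax = b via its KKT system.

    KKT conditions: Qx + A'lam = -q and Ax = b. Since torch.linalg.solve
    is differentiable, backpropagating through this call is implicit
    differentiation of the KKT conditions at the QP's solution.
    """
    m = A.shape[0]
    top = torch.cat([Q, A.T], dim=1)
    bot = torch.cat([A, torch.zeros(m, m, dtype=Q.dtype)], dim=1)
    K = torch.cat([top, bot], dim=0)          # KKT matrix [[Q, A'], [A, 0]]
    sol = torch.linalg.solve(K, torch.cat([-q, b]))
    return sol[: Q.shape[0]]                  # primal solution x*


# Toy end-to-end imitation step: learn the QP's cost so that its argmin
# matches an "expert" target (which need not be realizable, mirroring the
# unrealizable-expert setting in the paper's experiments).
torch.manual_seed(0)
n, m = 4, 2
L = torch.randn(n, n, requires_grad=True)     # parameterizes Q = LL' + eps*I
q = torch.randn(n, requires_grad=True)
A, b = torch.randn(m, n), torch.randn(m)
x_expert = torch.randn(n)

opt = torch.optim.Adam([L, q], lr=1e-2)
for _ in range(500):
    Q = L @ L.T + 1e-3 * torch.eye(n)         # keep the cost convex
    x_star = solve_eq_qp(Q, q, A, b)
    loss = (x_star - x_expert).pow(2).sum()   # imitation loss on the argmin
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's full method the same principle is applied to the box-constrained LQR subproblem that the iterative MPC solver converges to; the equality-constrained QP here serves only to show how gradients of a downstream loss reach the parameters of the optimization problem.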
Cite
Text
Amos et al. "Differentiable MPC for End-to-End Planning and Control." Neural Information Processing Systems, 2018.

Markdown

[Amos et al. "Differentiable MPC for End-to-End Planning and Control." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/amos2018neurips-differentiable/)

BibTeX
@inproceedings{amos2018neurips-differentiable,
  title     = {{Differentiable MPC for End-to-End Planning and Control}},
  author    = {Amos, Brandon and Jimenez, Ivan and Sacks, Jacob and Boots, Byron and Kolter, J. Zico},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {8289--8300},
  url       = {https://mlanthology.org/neurips/2018/amos2018neurips-differentiable/}
}