Data Efficient Reinforcement Learning for Legged Robots
Abstract
We present a model-based reinforcement learning framework for robot locomotion that achieves walking based on only 4.5 minutes of data collected on a quadruped robot. To accurately model the robot’s dynamics over a long horizon, we introduce a loss function that tracks the model’s prediction over multiple timesteps. We adapt model predictive control to account for planning latency, which allows the learned model to be used for real-time control. Additionally, to ensure safe exploration during model learning, we embed prior knowledge of leg trajectories into the action space. The resulting system achieves fast and robust locomotion. Unlike model-free methods, which optimize for a particular task, our planner can use the same learned dynamics for various tasks, simply by changing the reward function. To the best of our knowledge, our approach is more than an order of magnitude more sample efficient than current model-free methods.
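The multi-step prediction loss mentioned in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch, assuming a learned one-step dynamics model callable as model(state, action); the function and variable names are hypothetical placeholders, and the paper's exact formulation may differ.

import torch

def multi_step_loss(model, states, actions, horizon):
    """Hedged sketch of a multi-step prediction loss (assumed interface).

    Instead of scoring only one-step predictions, roll the learned
    dynamics model forward `horizon` steps from a recorded state,
    feeding its own predictions back in, and penalize drift from the
    recorded trajectory. This encourages accuracy over long horizons.

    states:  tensor of shape (horizon + 1, state_dim), recorded states
    actions: tensor of shape (horizon, action_dim), recorded actions
    """
    pred = states[0]
    loss = torch.zeros(())
    for t in range(horizon):
        # Predict the next state from the model's *own* previous prediction.
        pred = model(pred, actions[t])
        loss = loss + torch.mean((pred - states[t + 1]) ** 2)
    return loss / horizon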
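Likewise, compensating for planning latency can be sketched as a small wrapper around the planner: rather than planning from the current state, predict where the robot will be once the new plan takes effect. Again a hedged sketch under assumed interfaces; planner, executing_actions, and latency_steps are illustrative names, not the paper's API.

def plan_with_latency(model, planner, state, executing_actions, latency_steps):
    """Hedged sketch of latency-compensated MPC (assumed interface).

    While the planner computes a new action sequence, the robot keeps
    executing the tail of the previous plan. Simulate those in-flight
    actions through the learned model so the new plan starts from the
    state the robot will actually be in when the plan arrives.
    """
    future_state = state
    for k in range(latency_steps):
        future_state = model(future_state, executing_actions[k])
    # Plan from the predicted future state instead of the stale current one.
    return planner(future_state)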
Cite
Text
Yang et al. "Data Efficient Reinforcement Learning for Legged Robots." Conference on Robot Learning, 2019.
Markdown
[Yang et al. "Data Efficient Reinforcement Learning for Legged Robots." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/yang2019corl-data/)
BibTeX
@inproceedings{yang2019corl-data,
title = {{Data Efficient Reinforcement Learning for Legged Robots}},
author = {Yang, Yuxiang and Caluwaerts, Ken and Iscen, Atil and Zhang, Tingnan and Tan, Jie and Sindhwani, Vikas},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {1--10},
volume = {100},
url = {https://mlanthology.org/corl/2019/yang2019corl-data/}
}