Learning to Walk in the Real World with Minimal Human Effort
Abstract
Reliable and stable locomotion has been one of the most fundamental challenges for legged robots. Deep reinforcement learning (deep RL) has emerged as a promising method for developing such control policies autonomously. In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort. The key difficulties for on-robot learning systems are automatic data collection and safety. We overcome these two challenges by developing a multi-task learning procedure and a safety-constrained RL framework. We tested our system on the task of learning to walk on three different terrains: flat ground, a soft mattress, and a doormat with crevices. Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
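As a point of reference, the safety-constrained RL framework mentioned in the abstract is typically posed as a constrained MDP. The sketch below writes out that standard objective and its common Lagrangian relaxation; the notation (reward r, safety cost c, cost budget d, multiplier λ) is the usual CMDP convention and an assumption here, not necessarily the paper's exact formulation.

% Standard constrained-MDP objective (assumed notation, not verbatim from the paper):
% maximize expected discounted return while keeping expected safety cost below a budget d.
\begin{aligned}
\max_{\pi}\;\; & \mathbb{E}_{\tau\sim\pi}\Big[\textstyle\sum_{t=0}^{T}\gamma^{t}\, r(s_t,a_t)\Big] \\
\text{s.t.}\;\; & \mathbb{E}_{\tau\sim\pi}\Big[\textstyle\sum_{t=0}^{T}\gamma^{t}\, c(s_t,a_t)\Big] \le d
\end{aligned}

% Lagrangian relaxation commonly used to optimize such a constrained objective,
% alternating updates of the policy pi and the multiplier lambda:
\min_{\lambda \ge 0}\,\max_{\pi}\;\;
\mathbb{E}_{\tau\sim\pi}\Big[\textstyle\sum_{t=0}^{T}\gamma^{t}\big(r(s_t,a_t) - \lambda\, c(s_t,a_t)\big)\Big] + \lambda\, d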
Cite
Text
Ha et al. "Learning to Walk in the Real World with Minimal Human Effort." Conference on Robot Learning, 2020.
Markdown
[Ha et al. "Learning to Walk in the Real World with Minimal Human Effort." Conference on Robot Learning, 2020.](https://mlanthology.org/corl/2020/ha2020corl-learning-a/)
BibTeX
@inproceedings{ha2020corl-learning-a,
title = {{Learning to Walk in the Real World with Minimal Human Effort}},
author = {Ha, Sehoon and Xu, Peng and Tan, Zhenyu and Levine, Sergey and Tan, Jie},
booktitle = {Conference on Robot Learning},
year = {2020},
pages = {1110-1120},
volume = {155},
url = {https://mlanthology.org/corl/2020/ha2020corl-learning-a/}
}