Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior
Abstract
Learned locomotion policies can rapidly adapt to diverse environments similar to those experienced during training but lack a mechanism for fast tuning when they fail in an out-of-distribution test environment. This necessitates a slow and iterative cycle of reward and environment redesign to achieve good performance on a new task. As an alternative, we propose learning a single policy that encodes a structured family of locomotion strategies that solve training tasks in different ways, resulting in Multiplicity of Behavior (MoB). Different strategies generalize differently and can be chosen in real-time for new tasks or environments, bypassing the need for time-consuming retraining. We release a fast, robust open-source MoB locomotion controller, Walk These Ways, that can execute diverse gaits with variable footswing, posture, and speed, unlocking diverse downstream tasks: crouching, hopping, high-speed running, stair traversal, bracing against shoves, rhythmic dance, and more. Video and code release: https://gmargo11.github.io/walk-these-ways
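The abstract describes a single policy conditioned on behavior parameters (gait, footswing, posture, speed) that the operator can change at test time. The sketch below is a minimal, hypothetical illustration of that interface in Python; it is not the released Walk These Ways API, and all names, parameters, and values are assumptions for illustration only.

import numpy as np

def dummy_act(obs, cmd):
    # Placeholder for a trained network mapping (observation, behavior command)
    # to joint-position targets; here it just returns zeros for 12 joints.
    return np.zeros(12)

class BehaviorConditionedPolicy:
    """Hypothetical behavior-conditioned policy wrapper (not the released code)."""

    def __init__(self, act):
        self.act = act

    def step(self, obs, command):
        # Pack the operator-chosen behavior parameters into the policy input,
        # so different locomotion strategies can be selected without retraining.
        cmd = np.array([
            command["vx"],                # forward velocity (m/s), assumed
            command["gait_frequency"],    # stepping frequency (Hz), assumed
            command["footswing_height"],  # swing apex height (m), assumed
            command["body_height"],       # torso height (m), assumed
        ])
        return self.act(obs, cmd)

# Example: the same policy queried with two different behavior commands,
# e.g. a crouched walk versus a faster, higher-stepping run.
policy = BehaviorConditionedPolicy(dummy_act)
obs = np.zeros(48)  # placeholder proprioceptive observation
crouch = {"vx": 0.5, "gait_frequency": 2.0, "footswing_height": 0.05, "body_height": 0.15}
sprint = {"vx": 3.0, "gait_frequency": 4.0, "footswing_height": 0.12, "body_height": 0.30}
action = policy.step(obs, crouch)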
Cite
Text
Margolis and Agrawal. "Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior." Conference on Robot Learning, 2022.
Markdown
[Margolis and Agrawal. "Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/margolis2022corl-walk/)
BibTeX
@inproceedings{margolis2022corl-walk,
title = {{Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior}},
author = {Margolis, Gabriel B. and Agrawal, Pulkit},
booktitle = {Conference on Robot Learning},
year = {2022},
pages = {22--31},
volume = {205},
url = {https://mlanthology.org/corl/2022/margolis2022corl-walk/}
}