Multi-Agent Manipulation via Locomotion Using Hierarchical Sim2Real
Abstract
Manipulation and locomotion are closely related problems that are often studied in isolation. In this work, we study the problem of coordinating multiple mobile agents to exhibit manipulation behaviors using a reinforcement learning (RL) approach. Our method hinges on the use of hierarchical sim2real – a simulated environment is used to learn low-level goal-reaching skills, which are then used as the action space for a high-level RL controller, also trained in simulation. The full hierarchical policy is then transferred to the real world in a zero-shot fashion. The application of domain randomization during training enables the learned behaviors to generalize to real-world settings, while the use of hierarchy provides a modular paradigm for learning and transferring increasingly complex behaviors. We evaluate our method on a number of real-world tasks, including coordinated object manipulation in a multi-agent setting.
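The hierarchy described in the abstract — a high-level controller whose action space consists of goals handed to a pretrained low-level goal-reaching skill — can be sketched as follows. This is a minimal illustration, not the paper's implementation: both policies here are hypothetical hand-coded stand-ins (a bounded greedy goal proposer and a proportional controller) rather than learned RL policies, and the temporal structure (the high level re-planning every few low-level steps) is the assumption being demonstrated.

```python
import numpy as np

class LowLevelPolicy:
    """Stand-in for a pretrained goal-reaching skill.

    In the paper this would be a policy learned in simulation; here it is
    a simple proportional controller that steps toward the commanded goal.
    """
    def __init__(self, gain=0.5):
        self.gain = gain

    def act(self, position, goal):
        # Move a fraction of the remaining distance to the goal.
        return self.gain * (goal - position)

class HighLevelPolicy:
    """Stand-in for the high-level controller.

    Its "actions" are goals for the low-level skill; here it greedily
    proposes a bounded step toward the task target.
    """
    def __init__(self, target, max_step=1.0):
        self.target = np.asarray(target, dtype=float)
        self.max_step = max_step

    def propose_goal(self, position):
        delta = self.target - position
        norm = np.linalg.norm(delta)
        if norm > self.max_step:
            delta *= self.max_step / norm
        return position + delta

def rollout(start, target, high_period=5, steps=50):
    """Run the hierarchy: the high level re-plans a goal every
    `high_period` low-level control steps."""
    low = LowLevelPolicy()
    high = HighLevelPolicy(target)
    pos = np.asarray(start, dtype=float)
    goal = pos.copy()
    for t in range(steps):
        if t % high_period == 0:
            goal = high.propose_goal(pos)   # high-level decision
        pos = pos + low.act(pos, goal)      # low-level skill execution
    return pos

final = rollout(start=[0.0, 0.0], target=[3.0, 4.0])
print(final)  # converges near the target [3, 4]
```

In the full method, the same interface would hold per agent, with each agent running its own low-level skill while a high-level policy coordinates their goals; only the learned policies replace the hand-coded rules above.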
Cite
Text
Nachum et al. "Multi-Agent Manipulation via Locomotion Using Hierarchical Sim2Real." Conference on Robot Learning, 2019.
Markdown
[Nachum et al. "Multi-Agent Manipulation via Locomotion Using Hierarchical Sim2Real." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/nachum2019corl-multiagent/)
BibTeX
@inproceedings{nachum2019corl-multiagent,
title = {{Multi-Agent Manipulation via Locomotion Using Hierarchical Sim2Real}},
author = {Nachum, Ofir and Ahn, Michael and Ponte, Hugo and Gu, Shixiang and Kumar, Vikash},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {110--121},
volume = {100},
url = {https://mlanthology.org/corl/2019/nachum2019corl-multiagent/}
}