A Framework for Efficient Robotic Manipulation

Abstract

Recent advances in unsupervised representation learning have significantly improved the sample efficiency of training reinforcement learning policies in simulated environments. However, similar gains have not yet been seen for real-robot learning. In this work, we focus on enabling data-efficient real-robot learning from pixels. We present a Framework for Efficient Robotic Manipulation (FERM), a method that utilizes data augmentation and unsupervised learning to achieve sample-efficient training of real-robot arm policies from sparse rewards. While contrastive pre-training, data augmentation, and demonstrations are each insufficient for efficient learning on their own, our main contribution is showing that combining these disparate techniques yields a simple yet data-efficient method. We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels, such as reaching, picking, moving, pulling a large object, flipping a switch, and opening a drawer, in an average of just 30 minutes of real-world training time.
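
To make the combination described in the abstract concrete, here is a minimal, illustrative Python (PyTorch) sketch of one ingredient: contrastively pre-training an image encoder on random crops of demonstration frames before reinforcement learning begins. All names, network sizes, and hyperparameters below (random_crop, SmallEncoder, info_nce, the 84x84 frames, the 76x76 crop size) are hypothetical and not taken from the paper; the authors' actual implementation differs in many details (for example, how the key encoder is updated and how contrastive learning continues alongside RL).

# Illustrative sketch only: contrastive pre-training on augmented demo frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_crop(imgs, out=76):
    """Randomly crop a batch of images (B, C, H, W) to (B, C, out, out)."""
    b, c, h, w = imgs.shape
    xs = torch.randint(0, h - out + 1, (b,))
    ys = torch.randint(0, w - out + 1, (b,))
    crops = []
    for i in range(b):
        x, y = int(xs[i]), int(ys[i])
        crops.append(imgs[i, :, x:x + out, y:y + out])
    return torch.stack(crops)

class SmallEncoder(nn.Module):
    """Tiny CNN encoder; sizes are placeholders, not the paper's architecture."""
    def __init__(self, in_ch=1, feat_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
        )
        self.fc = nn.LazyLinear(feat_dim)  # infers flattened size on first call

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def info_nce(q, k, W):
    """InfoNCE loss: two crops of the same frame form the positive pair."""
    logits = q @ W @ k.t()            # (B, B) bilinear similarity matrix
    labels = torch.arange(q.size(0))  # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

# Pre-train the encoder on frames from the demonstrations (placeholder data).
demo_frames = torch.rand(256, 1, 84, 84)
encoder = SmallEncoder()
W = nn.Parameter(torch.eye(50))
opt = torch.optim.Adam(list(encoder.parameters()) + [W], lr=1e-3)

for step in range(100):
    idx = torch.randint(0, demo_frames.size(0), (32,))
    batch = demo_frames[idx]
    q = encoder(random_crop(batch))           # query crops
    k = encoder(random_crop(batch)).detach()  # key crops, no gradient through keys
    loss = info_nce(q, k, W)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The pre-trained encoder would then be handed to an off-policy RL agent whose
# replay buffer is seeded with the same demonstrations and whose observations
# are randomly cropped at every update, which is the combination the abstract
# credits for the reported sample efficiency.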

Cite

Text

Zhan et al. "A Framework for Efficient Robotic Manipulation." NeurIPS 2021 Workshops: DeepRL, 2021.

Markdown

[Zhan et al. "A Framework for Efficient Robotic Manipulation." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/zhan2021neuripsw-framework/)

BibTeX

@inproceedings{zhan2021neuripsw-framework,
  title     = {{A Framework for Efficient Robotic Manipulation}},
  author    = {Zhan, Albert and Zhao, Ruihan and Pinto, Lerrel and Abbeel, Pieter and Laskin, Michael},
  booktitle = {NeurIPS 2021 Workshops: DeepRL},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/zhan2021neuripsw-framework/}
}