Learning to Regrasp by Learning to Place
Abstract
In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses. Regrasping is needed whenever a robot’s current grasp pose fails to support the desired manipulation task. Endowing robots with such an ability has applications in many domains, such as manufacturing and domestic services. Yet it is a challenging task due to the large diversity of geometry in everyday objects and the high dimensionality of the state and action spaces. We propose a system that takes partial point clouds of an object and its supporting environment as inputs and outputs a sequence of pick-and-place operations that transforms an initial object grasp pose into the desired one. The key techniques are a neural stable-placement predictor and a regrasp-graph-based planner that leverages, and when necessary modifies, the surrounding environment. We introduce a new and challenging synthetic dataset for learning and evaluating the proposed approach, and demonstrate the effectiveness of our system in both simulated and real-world experiments. More videos and visualization examples are available on our project website: https://sites.google.com/view/regrasp.
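The regrasp-graph idea in the abstract can be sketched as a graph search: nodes are grasps, and two grasps are connected when some intermediate stable placement lets the robot set the object down under one grasp and re-pick it with the other. The sketch below is an illustrative toy, not the paper's implementation: the `feasible` oracle stands in for the learned stable-placement and grasp predictors, and the grasp/placement names are made up.

```python
from collections import deque

def plan_regrasp(start_grasp, goal_grasp, grasps, placements, feasible):
    """Breadth-first search over grasps linked by stable placements.

    feasible(placement, grasp) -> True when the grasp is collision-free
    while the object rests in that placement (in the paper this role is
    played by learned predictors; here it is a given oracle).
    A step (p, g) means: place the object in placement p, release it,
    and re-pick it with grasp g. Returns the shortest step sequence
    reaching goal_grasp, or None if no sequence exists.
    """
    queue = deque([(start_grasp, [])])
    visited = {start_grasp}
    while queue:
        grasp, steps = queue.popleft()
        if grasp == goal_grasp:
            return steps
        for p in placements:
            if not feasible(p, grasp):  # cannot set the object down this way
                continue
            for g in grasps:
                if g not in visited and feasible(p, g):
                    visited.add(g)
                    queue.append((g, steps + [(p, g)]))
    return None
```

For example, if a "top" grasp and a "bottom" grasp share no stable placement but both share one with a "side" grasp, the planner returns a two-step pick-and-place sequence through the intermediate "side" grasp, mirroring the multi-step regrasping behavior described above.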
Cite
Text
Cheng et al. "Learning to Regrasp by Learning to Place." Conference on Robot Learning, 2021.
Markdown
[Cheng et al. "Learning to Regrasp by Learning to Place." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/cheng2021corl-learning/)
BibTeX
@inproceedings{cheng2021corl-learning,
title = {{Learning to Regrasp by Learning to Place}},
author = {Cheng, Shuo and Mo, Kaichun and Shao, Lin},
booktitle = {Conference on Robot Learning},
year = {2021},
pages = {277--286},
volume = {164},
url = {https://mlanthology.org/corl/2021/cheng2021corl-learning/}
}