Robotic Grasping of Novel Objects
Abstract
We consider the problem of grasping novel objects, specifically objects that the robot perceives for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead, it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.
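To make the abstract's idea concrete, here is a minimal sketch of learning to predict a grasp point directly from image features via supervised regression on synthetic data. Everything here is illustrative: the feature vectors, the linear model, and the synthetic labels are hypothetical stand-ins for the paper's rendered training images and learned grasp-point classifier, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "image" is summarized by a feature vector x,
# and the label is the 2-D pixel location (u, v) of a good grasp point.
# Synthetic data stands in for the paper's rendered training images.
n_train, n_feat = 200, 16
W_true = rng.normal(size=(n_feat, 2))                  # unknown feature-to-point map
X = rng.normal(size=(n_train, n_feat))                 # training image features
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, 2))  # noisy grasp-point labels

# Supervised learning: fit a linear map from features to grasp point
# with ordinary least squares (a simple stand-in for the learned model).
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict a grasp point for a novel "object" never seen during training.
x_new = rng.normal(size=(1, n_feat))
u, v = (x_new @ W_hat)[0]
```

The key property mirrored here is that no 3-d model is built: the predictor maps image-derived features straight to a grasp location, so it can generalize to objects absent from the training set as long as their features fall in the learned distribution.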
Cite

Text

Saxena et al. "Robotic Grasping of Novel Objects." Neural Information Processing Systems, 2006.

Markdown

[Saxena et al. "Robotic Grasping of Novel Objects." Neural Information Processing Systems, 2006.](https://mlanthology.org/neurips/2006/saxena2006neurips-robotic/)

BibTeX
@inproceedings{saxena2006neurips-robotic,
  title = {{Robotic Grasping of Novel Objects}},
  author = {Saxena, Ashutosh and Driemeyer, Justin and Kearns, Justin and Ng, Andrew Y.},
  booktitle = {Neural Information Processing Systems},
  year = {2006},
  pages = {1209--1216},
  url = {https://mlanthology.org/neurips/2006/saxena2006neurips-robotic/}
}