Grasping the Apparent Contour

Abstract

Two-fingered grasps of a priori unknown 3D objects can be achieved effectively using active vision. Real-time contour tracking localises the silhouette of the object, viewed from a camera mounted, together with a gripper, on a moving robot arm. Geometric information from the analysis of motion around one vantage point guides the robot towards a new vantage point from which the rim (the inverse image of the silhouette) admits a more stable grasp. This use of deliberate camera motion to compute the best direction for the robot's subsequent motion is computationally efficient: visual processing is concentrated around potential grasp points, and costly global reconstruction of the entire surface is avoided. The computation is shown to be robust, both theoretically, owing to a connection with visual parallax, and in computational experiments.
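The grasp-selection idea in the abstract can be illustrated on a planar contour. The following is a minimal sketch, not the paper's algorithm: given tracked contour points, it searches for antipodal point pairs whose outward normals oppose the line joining them, a standard stability condition for a two-finger grasp. The function names and the `angle_tol` tolerance are illustrative assumptions, not from the paper.

```python
import numpy as np

def contour_normals(pts):
    """Estimate outward unit normals on a closed, counter-clockwise
    (N, 2) contour using central-difference tangents."""
    t = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    # Rotating the CCW tangent by -90 degrees gives the outward normal.
    return np.stack([t[:, 1], -t[:, 0]], axis=1)

def antipodal_pairs(pts, normals, angle_tol=0.1):
    """Return index pairs (i, j) satisfying the antipodal two-finger
    grasp condition: the outward normals at i and j oppose the line
    joining them, each to within angle_tol radians."""
    pairs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = pts[j] - pts[i]
            d = d / np.linalg.norm(d)
            cos_i = np.dot(normals[i], -d)  # finger at i presses along +d
            cos_j = np.dot(normals[j], d)   # finger at j presses along -d
            if (np.arccos(np.clip(cos_i, -1.0, 1.0)) < angle_tol and
                    np.arccos(np.clip(cos_j, -1.0, 1.0)) < angle_tol):
                pairs.append((i, j))
    return pairs
```

For an ellipse sampled counter-clockwise, the pairs returned cluster around the ends of the major and minor axes, where opposite-side normals are exactly anti-parallel; `angle_tol` plays the role of a friction-cone half-angle.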

Cite

Text

Taylor and Blake. "Grasping the Apparent Contour." European Conference on Computer Vision, 1994. doi:10.1007/BFB0028332

Markdown

[Taylor and Blake. "Grasping the Apparent Contour." European Conference on Computer Vision, 1994.](https://mlanthology.org/eccv/1994/taylor1994eccv-grasping/) doi:10.1007/BFB0028332

BibTeX

@inproceedings{taylor1994eccv-grasping,
  title     = {{Grasping the Apparent Contour}},
  author    = {Taylor, Michael J. and Blake, Andrew},
  booktitle = {European Conference on Computer Vision},
  year      = {1994},
  pages     = {25--34},
  doi       = {10.1007/BFB0028332},
  url       = {https://mlanthology.org/eccv/1994/taylor1994eccv-grasping/}
}