Inferring Geometric Constraints in Human Demonstrations

Abstract

This paper presents an approach for inferring geometric constraints in human demonstrations. Our method builds geometric constraint models that represent kinematic constraints such as fixed point, axial rotation, prismatic motion, and planar motion, among others, across multiple degrees of freedom. It infers geometric constraints using both kinematic and force/torque information: it first fits all the constraint models using kinematic information, then evaluates each individually using position, force, and moment criteria. The approach requires no prior information about the constraint type or contact geometry; it determines both simultaneously. We present experimental evaluations using instrumented tongs that show how constraints can be robustly inferred in recordings of human demonstrations.
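The abstract's fit-then-evaluate idea can be illustrated with a small sketch. The code below is a hypothetical simplification, not the paper's implementation: it fits two candidate constraint models (prismatic, i.e. motion along a line, and planar, i.e. motion in a plane) to recorded tool positions and scores each by its RMS position residual. The function names and synthetic data are invented for illustration, and the paper's force and moment criteria, which disambiguate nested models that kinematics alone cannot separate, are omitted.

```python
# Hedged sketch of fitting candidate geometric-constraint models to
# demonstration positions and evaluating them with a position criterion.
# Names (fit_prismatic, fit_planar) and data are hypothetical.
import numpy as np


def fit_prismatic(points):
    """Fit a line (point + direction) to 3-D points via PCA; return RMS residual."""
    centered = points - points.mean(axis=0)
    # Dominant right-singular vector gives the best-fit line direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Residual: perpendicular distance of each point from the fitted line.
    proj = centered @ direction
    residuals = centered - np.outer(proj, direction)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())


def fit_planar(points):
    """Fit a plane to 3-D points; the smallest singular vector is the normal."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Residual: out-of-plane distance of each point.
    return np.sqrt(((centered @ normal) ** 2).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic demonstration: a drawer-like straight-line motion plus noise.
    t = np.linspace(0.0, 0.3, 200)
    points = np.outer(t, [1.0, 0.5, 0.0]) + 1e-3 * rng.standard_normal((200, 3))
    scores = {"prismatic": fit_prismatic(points), "planar": fit_planar(points)}
    # Note: a line lies in many planes, so both residuals are small here;
    # this is exactly the ambiguity the paper's force/moment criteria resolve.
    print(scores, "-> lowest residual:", min(scores, key=scores.get))
```

Because the planar model nests the prismatic one, position residuals alone cannot prefer the tighter constraint; per the abstract, the method's force and moment criteria supply the missing evidence.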

Cite

Text

Subramani et al. "Inferring Geometric Constraints in Human Demonstrations." Conference on Robot Learning, 2018.

Markdown

[Subramani et al. "Inferring Geometric Constraints in Human Demonstrations." Conference on Robot Learning, 2018.](https://mlanthology.org/corl/2018/subramani2018corl-inferring/)

BibTeX

@inproceedings{subramani2018corl-inferring,
  title     = {{Inferring Geometric Constraints in Human Demonstrations}},
  author    = {Subramani, Guru and Zinn, Michael R. and Gleicher, Michael},
  booktitle = {Conference on Robot Learning},
  year      = {2018},
  pages     = {223--236},
  url       = {https://mlanthology.org/corl/2018/subramani2018corl-inferring/}
}