Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning

Abstract

This work proposes an optimization-based manipulation planning framework in which the objectives are learned functionals of signed-distance fields (SDFs) that represent objects in the scene. Most manipulation planning approaches rely on analytical models and carefully chosen abstractions/state-spaces to be effective. A central question is how models can be obtained from data whose primary purpose is not prediction accuracy but, more importantly, enabling efficient reasoning within a planning framework, while at the same time being closely coupled to perception spaces. We show that representing objects as signed-distance fields not only enables learning and representing a variety of models with higher accuracy compared to point-cloud and occupancy measure representations, but also that SDF-based models are well suited to optimization-based planning. To demonstrate the versatility of our approach, we learn both kinematic and dynamic models to solve tasks that involve hanging mugs on hooks and pushing objects on a table. We can unify these quite different tasks within one framework, since SDFs are the common object representation. Video: https://youtu.be/ga8Wlkss7co
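To make the core idea concrete, below is a minimal sketch (not from the paper) of what a signed-distance field looks like and how a planning objective can be phrased as a functional of it. The sphere SDF, the `clearance_cost` penalty, and the margin value are illustrative assumptions; the paper learns such functionals from data rather than writing them analytically.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=0.5):
    """Signed distance from point p to a sphere:
    negative inside, zero on the surface, positive outside."""
    return math.dist(p, center) - radius

def clearance_cost(sdf, p, margin=0.05):
    """Illustrative planning objective as a functional of the SDF:
    a soft penalty that pushes point p at least `margin` outside the object."""
    return max(0.0, margin - sdf(p)) ** 2

# Query the field and the objective at a few points.
print(sphere_sdf((1.0, 0.0, 0.0)))                    # positive: outside
print(clearance_cost(sphere_sdf, (0.52, 0.0, 0.0)))   # small penalty: too close
print(clearance_cost(sphere_sdf, (2.0, 0.0, 0.0)))    # zero: safely clear
```

Because the SDF is a smooth function of the query point almost everywhere, such objectives are differentiable and fit naturally into gradient-based trajectory optimization, which is what makes the SDF representation attractive for planning.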

Cite

Text

Driess et al. "Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning." Conference on Robot Learning, 2021.

Markdown

[Driess et al. "Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/driess2021corl-learning/)

BibTeX

@inproceedings{driess2021corl-learning,
  title     = {{Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning}},
  author    = {Driess, Danny and Ha, Jung-Su and Toussaint, Marc and Tedrake, Russ},
  booktitle = {Conference on Robot Learning},
  year      = {2021},
  pages     = {245--255},
  volume    = {164},
  url       = {https://mlanthology.org/corl/2021/driess2021corl-learning/}
}