Structured Deep Generative Models for Sampling on Constraint Manifolds in Sequential Manipulation

Abstract

Sampling efficiently on constraint manifolds is a core problem in robotics. We propose Deep Generative Constraint Sampling (DGCS), which combines a deep generative model for sampling close to a constraint manifold with nonlinear constrained optimization to project onto the constraint manifold. The generative model is conditioned on the problem instance, taking a scene image as input, and it is trained with a dataset of solutions and a novel analytic constraint term. To further improve the precision and diversity of samples, we extend the approach to exploit a factorization of the constrained problem. We evaluate our approach on two problems of robotic sequential manipulation in cluttered environments. Experimental results demonstrate that our deep generative model produces diverse and precise samples and outperforms heuristic warmstart initialization.
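The sample-then-project pipeline the abstract describes can be illustrated with a minimal toy sketch. This is not the paper's implementation: the learned conditional generative model is replaced by a hypothetical sampler that produces noisy points near a unit-circle "constraint manifold", and the projection step uses an off-the-shelf SLSQP solver to find the nearest point satisfying the constraint.

```python
import numpy as np
from scipy.optimize import minimize

def generative_sample(rng):
    """Hypothetical stand-in for the learned conditional generative model:
    returns a point near the constraint manifold (here, the unit circle)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    noise = rng.normal(scale=0.1, size=2)
    return np.array([np.cos(theta), np.sin(theta)]) + noise

def constraint(x):
    """h(x) = 0 defines the toy constraint manifold (unit circle)."""
    return x[0] ** 2 + x[1] ** 2 - 1.0

def project(x0):
    """Nonlinear constrained optimization: stay close to the generated
    warmstart x0 while satisfying h(x) = 0."""
    res = minimize(
        lambda x: np.sum((x - x0) ** 2),
        x0,
        constraints=[{"type": "eq", "fun": constraint}],
        method="SLSQP",
    )
    return res.x

rng = np.random.default_rng(0)
samples = [project(generative_sample(rng)) for _ in range(5)]
residual = max(abs(constraint(s)) for s in samples)
print(residual)  # constraint violation after projection is near zero
```

Because the generative samples already lie near the manifold, the projection converges quickly; in DGCS the same idea applies with a scene-conditioned deep generative model providing the warmstarts.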

Cite

Text

Ortiz-Haro et al. "Structured Deep Generative Models for Sampling on Constraint Manifolds in Sequential Manipulation." Conference on Robot Learning, 2021.

Markdown

[Ortiz-Haro et al. "Structured Deep Generative Models for Sampling on Constraint Manifolds in Sequential Manipulation." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/ortizharo2021corl-structured/)

BibTeX

@inproceedings{ortizharo2021corl-structured,
  title     = {{Structured Deep Generative Models for Sampling on Constraint Manifolds in Sequential Manipulation}},
  author    = {Ortiz-Haro, Joaquim and Ha, Jung-Su and Driess, Danny and Toussaint, Marc},
  booktitle = {Conference on Robot Learning},
  year      = {2021},
  pages     = {213--223},
  volume    = {164},
  url       = {https://mlanthology.org/corl/2021/ortizharo2021corl-structured/}
}