Non-Rigid Relative Placement Through 3D Dense Diffusion
Abstract
The task of “relative placement” is to predict the placement of one object in relation to another, e.g., placing a mug on a mug rack. Recent methods for relative placement have made tremendous progress towards data-efficient learning for robot manipulation; using explicit object-centric geometric reasoning, these approaches enable generalization to unseen task variations from a small number of demonstrations. State-of-the-art works in this area, however, have yet to represent deformable transformations, despite the ubiquity of non-rigid bodies in real-world settings. As a first step towards bridging this gap, we propose “cross-displacement,” an extension of the principles of relative placement to geometric relationships between deformable objects, and present a novel vision-based method to learn cross-displacement for a non-rigid task through dense diffusion. We demonstrate our method’s ability to generalize to unseen object instances, out-of-distribution scene configurations, and multimodal goals on a highly deformable cloth-hanging task beyond the scope of prior works.
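For intuition about what “dense diffusion” over cross-displacements might look like in practice, the sketch below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: it assumes a standard DDPM-style objective in which a per-point network predicts the noise added to each action-object point's displacement toward its goal pose, conditioned on the anchor object's point cloud. All class and function names, shapes, and the toy encoders are assumptions made for illustration.

import torch
import torch.nn as nn

class DenseDisplacementDenoiser(nn.Module):
    """Predicts per-point noise on 3D displacements (hypothetical architecture)."""
    def __init__(self, hidden=128):
        super().__init__()
        # Toy per-point MLP encoders; a real system would use a point-cloud backbone.
        self.action_enc = nn.Sequential(nn.Linear(3, hidden), nn.ReLU())
        self.anchor_enc = nn.Sequential(nn.Linear(3, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(hidden * 2 + 3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted noise on each point's displacement
        )

    def forward(self, action_pts, anchor_pts, noisy_disp, t):
        # action_pts: (B, N, 3); anchor_pts: (B, M, 3); noisy_disp: (B, N, 3); t: (B,)
        a = self.action_enc(action_pts)                    # (B, N, H) per-point features
        g = self.anchor_enc(anchor_pts).max(dim=1).values  # (B, H) global anchor feature
        g = g.unsqueeze(1).expand(-1, a.shape[1], -1)      # broadcast to every action point
        # Raw timestep as a scalar feature; a real model would use a learned embedding.
        t_feat = t.float().view(-1, 1, 1).expand(-1, a.shape[1], 1)
        return self.head(torch.cat([a, g, noisy_disp, t_feat], dim=-1))

def ddpm_training_step(model, action_pts, anchor_pts, goal_disp, alphas_bar, opt):
    """One standard DDPM step on per-point ("dense") displacements."""
    B = goal_disp.shape[0]
    t = torch.randint(0, len(alphas_bar), (B,))
    ab = alphas_bar[t].view(B, 1, 1)
    eps = torch.randn_like(goal_disp)
    noisy = ab.sqrt() * goal_disp + (1 - ab).sqrt() * eps  # forward diffusion
    loss = ((model(action_pts, anchor_pts, noisy, t) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage on random data, purely to show the shapes involved:
model = DenseDisplacementDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
alphas_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, 100), dim=0)
loss = ddpm_training_step(model, torch.randn(2, 512, 3), torch.randn(2, 512, 3),
                          torch.randn(2, 512, 3), alphas_bar, opt)

The sketch only shows how a per-point prediction head and the standard diffusion objective fit together; the paper's actual architecture, conditioning, and sampling procedure are not reproduced here.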
Cite
Text
Cai et al. "Non-Rigid Relative Placement Through 3D Dense Diffusion." Proceedings of The 8th Conference on Robot Learning, 2024.
Markdown
[Cai et al. "Non-Rigid Relative Placement Through 3D Dense Diffusion." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/cai2024corl-nonrigid/)
BibTeX
@inproceedings{cai2024corl-nonrigid,
  title     = {{Non-Rigid Relative Placement Through 3D Dense Diffusion}},
  author    = {Cai, Eric and Donca, Octavian and Eisner, Ben and Held, David},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  year      = {2024},
  pages     = {1268--1289},
  volume    = {270},
  url       = {https://mlanthology.org/corl/2024/cai2024corl-nonrigid/}
}