UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence
Abstract
Garment manipulation (e.g., unfolding, folding, and hanging clothes) is essential for future robots to accomplish home-assistant tasks, while highly challenging due to the diversity of garment configurations, geometries, and deformations. Although able to manipulate similarly shaped garments in a certain task, previous works mostly have to design different policies for different tasks, could not generalize to garments with diverse geometries, and often rely heavily on human-annotated data. In this paper, we leverage the property that garments in a certain category have similar structures, and learn the topological dense (point-level) visual correspondence among garments of the same category under different deformations in a self-supervised manner. The topological correspondence can be easily adapted to the functional correspondence to guide the manipulation policies for various downstream tasks, with only one-shot or few-shot demonstrations. Experiments over garments in 3 different categories on 3 representative tasks in diverse scenarios (using one or two arms, taking one or more steps, inputting flat or messy garments) demonstrate the effectiveness of our proposed method. Project page: https://warshallrho.github.io/unigarmentmanip.
Cite
Text
Wu et al. "UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01546
Markdown
[Wu et al. "UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/wu2024cvpr-unigarmentmanip/) doi:10.1109/CVPR52733.2024.01546
BibTeX
@inproceedings{wu2024cvpr-unigarmentmanip,
title = {{UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence}},
author = {Wu, Ruihai and Lu, Haoran and Wang, Yiyan and Wang, Yubo and Dong, Hao},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
  pages = {16340--16350},
doi = {10.1109/CVPR52733.2024.01546},
url = {https://mlanthology.org/cvpr/2024/wu2024cvpr-unigarmentmanip/}
}