GarmentTracking: Category-Level Garment Pose Tracking

Abstract

Garments are important to humans. A visual system that can estimate and track the complete garment pose can be useful for many downstream tasks and real-world applications. In this work, we present a complete package to address the category-level garment pose tracking task: (1) A recording system, VR-Garment, with which users can manipulate virtual garment models in simulation through a VR interface. (2) A large-scale dataset, VR-Folding, with complex garment pose configurations under manipulation tasks such as flattening and folding. (3) An end-to-end online tracking framework, GarmentTracking, which predicts the complete garment pose in both canonical space and task space given a point cloud sequence. Extensive experiments demonstrate that the proposed GarmentTracking performs well even when the garment undergoes large non-rigid deformation. It outperforms the baseline approach in both speed and accuracy. We hope our proposed solution can serve as a platform for future research. Code and datasets are available at https://garment-tracking.robotflow.ai.

Cite

Text

Xue et al. "GarmentTracking: Category-Level Garment Pose Tracking." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02034

Markdown

[Xue et al. "GarmentTracking: Category-Level Garment Pose Tracking." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/xue2023cvpr-garmenttracking/) doi:10.1109/CVPR52729.2023.02034

BibTeX

@inproceedings{xue2023cvpr-garmenttracking,
  title     = {{GarmentTracking: Category-Level Garment Pose Tracking}},
  author    = {Xue, Han and Xu, Wenqiang and Zhang, Jieyi and Tang, Tutian and Li, Yutong and Du, Wenxin and Ye, Ruolin and Lu, Cewu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {21233--21242},
  doi       = {10.1109/CVPR52729.2023.02034},
  url       = {https://mlanthology.org/cvpr/2023/xue2023cvpr-garmenttracking/}
}