SNUG: Self-Supervised Neural Dynamic Garments

Abstract

We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies. State-of-the-art data-driven approaches to modeling 3D garment deformations are trained with supervised strategies that require large datasets, usually obtained by expensive physics-based simulation methods or professional multi-camera capture setups. In contrast, we propose a new training scheme that removes the need for ground-truth samples, enabling self-supervised training of dynamic 3D garment deformations. Our key contribution is the realization that physics-based deformation models, traditionally solved on a frame-by-frame basis by implicit integrators, can be recast as an optimization problem. We leverage this optimization-based scheme to formulate a set of physics-based loss terms that can be used to train neural networks without precomputing ground-truth data. This allows us to learn models for interactive garments, including dynamic deformations and fine wrinkles, with a two-orders-of-magnitude speedup in training time compared to state-of-the-art supervised methods.
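To make the optimization-based recast concrete, below is a minimal PyTorch sketch of how such physics-based loss terms might look: the inertia term of the incremental potential for backward Euler plus a gravitational potential, evaluated on predicted garment vertices. This is an illustrative assumption, not the authors' code; the tensor shapes, names like `x_prev` and `v_prev`, and the omission of the strain, bending, and collision terms are all simplifications.

```python
import torch

# Assumed shapes: x, x_prev, v_prev are (B, V, 3) batches of garment
# vertex positions/velocities; mass is (V,) per-vertex lumped mass.

def inertia_loss(x, x_prev, v_prev, mass, dt=1.0 / 30.0):
    """Inertia term of the backward-Euler incremental potential:
    (1 / (2 dt^2)) * ||x - (x_prev + dt * v_prev)||_M^2."""
    y = x_prev + dt * v_prev                      # inertial prediction
    d = x - y
    return (0.5 / dt ** 2) * (mass[None, :, None] * d ** 2).sum(dim=(1, 2)).mean()

def gravity_loss(x, mass, g=9.81):
    """Gravitational potential energy, assuming +y is the up axis."""
    return g * (mass[None, :] * x[..., 1]).sum(dim=1).mean()

def physics_loss(x, x_prev, v_prev, mass):
    # In the full formulation, elastic strain, bending, and body-collision
    # penalty terms would be added analogously; minimizing this total
    # energy over the network weights replaces supervision with
    # precomputed ground-truth simulations.
    return inertia_loss(x, x_prev, v_prev, mass) + gravity_loss(x, mass)
```

Because the loss is the physical energy itself rather than a distance to simulated targets, each training step plays the role of one implicit-integration solve, which is what removes the need for a ground-truth dataset.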

Cite

Text

Santesteban et al. "SNUG: Self-Supervised Neural Dynamic Garments." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00797

Markdown

[Santesteban et al. "SNUG: Self-Supervised Neural Dynamic Garments." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/santesteban2022cvpr-snug/) doi:10.1109/CVPR52688.2022.00797

BibTeX

@inproceedings{santesteban2022cvpr-snug,
  title     = {{SNUG: Self-Supervised Neural Dynamic Garments}},
  author    = {Santesteban, Igor and Otaduy, Miguel A. and Casas, Dan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {8140--8150},
  doi       = {10.1109/CVPR52688.2022.00797},
  url       = {https://mlanthology.org/cvpr/2022/santesteban2022cvpr-snug/}
}