CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos

Abstract

Animating an object in 3D often requires an articulated structure, e.g. a kinematic chain or skeleton of the manipulated object with proper skinning weights, to obtain smooth movements and surface deformations. However, existing models that allow direct pose manipulation are either limited to specific object categories or built with specialized equipment. To reduce the work needed to create animatable 3D models, we propose a novel reconstruction method that learns an animatable kinematic chain for any articulated object. Our method operates on monocular videos without prior knowledge of the object's shape or underlying structure. Our approach is on par with state-of-the-art 3D surface reconstruction methods on various articulated object categories while enabling direct pose manipulation by re-posing the learned kinematic chain. Our project page: https://camm3d.github.io/.

Cite

Text

Kuai et al. "CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00700

Markdown

[Kuai et al. "CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/kuai2023cvprw-camm/) doi:10.1109/CVPRW59228.2023.00700

BibTeX

@inproceedings{kuai2023cvprw-camm,
  title     = {{CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos}},
  author    = {Kuai, Tianshu and Karthikeyan, Akash and Kant, Yash and Mirzaei, Ashkan and Gilitschenski, Igor},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {6587--6597},
  doi       = {10.1109/CVPRW59228.2023.00700},
  url       = {https://mlanthology.org/cvprw/2023/kuai2023cvprw-camm/}
}