Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects

Abstract

We address the problem of building digital twins of unknown articulated objects from two RGBD scans of the object at different articulation states. We decompose the problem into two stages, each addressing distinct aspects. Our method first reconstructs the object-level shape at each state, then recovers the underlying articulation model, including part segmentation and joint articulations that associate the two states. By explicitly modeling point-level correspondences and exploiting cues from images, 3D reconstructions, and kinematics, our method yields more accurate and stable results than prior work. It also handles more than one movable part and does not rely on any object shape or structure priors. Project page: https://github.com/NVlabs/DigitalTwinArt

Cite

Text

Weng et al. "Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00303

Markdown

[Weng et al. "Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/weng2024cvpr-neural/) doi:10.1109/CVPR52733.2024.00303

BibTeX

@inproceedings{weng2024cvpr-neural,
  title     = {{Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects}},
  author    = {Weng, Yijia and Wen, Bowen and Tremblay, Jonathan and Blukis, Valts and Fox, Dieter and Guibas, Leonidas and Birchfield, Stan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {3141--3150},
  doi       = {10.1109/CVPR52733.2024.00303},
  url       = {https://mlanthology.org/cvpr/2024/weng2024cvpr-neural/}
}