Reconstructing Articulated Rigged Models from RGB-D Videos

Abstract

Although commercial and open-source software exists to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation. In this work, we fill this gap and propose a method that creates a fully rigged model of an articulated object from depth data of a single sensor. To this end, we combine deformable mesh tracking, motion segmentation based on spectral clustering, and skeletonization based on mean curvature flow. The fully rigged model then consists of a watertight mesh, an embedded skeleton, and skinning weights.
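
The motion-segmentation step groups mesh vertices that move rigidly together across the sequence. Below is a minimal Python sketch of that idea, assuming per-vertex 3D trajectories produced by the mesh-tracking step; the names (`segment_motion`, `trajectories`, `sigma`) and the distance-variance affinity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import SpectralClustering

def segment_motion(trajectories, n_parts, sigma=0.01):
    """trajectories: (V, T, 3) tracked vertex positions; returns (V,) part labels."""
    V, T, _ = trajectories.shape
    # Pairwise Euclidean distances between all vertices at every frame.
    dists = np.stack([squareform(pdist(trajectories[:, t])) for t in range(T)])
    # Vertices on the same rigid part keep a near-constant mutual distance,
    # so the standard deviation of that distance over time is a natural
    # dissimilarity; a Gaussian kernel turns it into an affinity.
    affinity = np.exp(-dists.std(axis=0) ** 2 / (2.0 * sigma ** 2))
    return SpectralClustering(n_clusters=n_parts, affinity="precomputed",
                              random_state=0).fit_predict(affinity)
```

The resulting rigged model deforms through its skinning weights; a standard way to apply such weights is linear blend skinning, sketched below under the assumption of one 4x4 rigid transform per skeleton bone (the paper's exact deformation model may differ).

```python
def linear_blend_skinning(rest_vertices, weights, bone_transforms):
    """rest_vertices: (V, 3); weights: (V, B), rows sum to 1;
    bone_transforms: (B, 4, 4) rest-to-posed rigid transforms."""
    homo = np.hstack([rest_vertices, np.ones((len(rest_vertices), 1))])  # (V, 4)
    # Apply every bone transform to every vertex, then blend by weights.
    per_bone = np.einsum("bij,vj->vbi", bone_transforms, homo)  # (V, B, 4)
    return np.einsum("vb,vbi->vi", weights, per_bone)[:, :3]
```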

Cite

Text

Tzionas and Gall. "Reconstructing Articulated Rigged Models from RGB-D Videos." European Conference on Computer Vision Workshops, 2016. doi:10.1007/978-3-319-49409-8_53

Markdown

[Tzionas and Gall. "Reconstructing Articulated Rigged Models from RGB-D Videos." European Conference on Computer Vision Workshops, 2016.](https://mlanthology.org/eccvw/2016/tzionas2016eccvw-reconstructing/) doi:10.1007/978-3-319-49409-8_53

BibTeX

@inproceedings{tzionas2016eccvw-reconstructing,
  title     = {{Reconstructing Articulated Rigged Models from RGB-D Videos}},
  author    = {Tzionas, Dimitrios and Gall, Juergen},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2016},
  pages     = {620--633},
  doi       = {10.1007/978-3-319-49409-8_53},
  url       = {https://mlanthology.org/eccvw/2016/tzionas2016eccvw-reconstructing/}
}