VolumeDeform: Real-Time Volumetric Non-Rigid Reconstruction
Abstract
We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method builds up the scene model from scratch during the scanning process and thus does not require a pre-defined shape template to start with. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth constraint. This enables accurate tracking and drastically reduces the drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera’s capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.
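The regularized variational problem described in the abstract can be sketched as a weighted sum of a dense data term, a sparse feature term, and a smoothness regularizer. The symbols and weights below are illustrative assumptions for exposition, not the paper's exact formulation:

```latex
E(\mathcal{D}) \;=\;
  w_{\mathrm{dense}}\, E_{\mathrm{dense}}(\mathcal{D})
\;+\; w_{\mathrm{sparse}}\, E_{\mathrm{sparse}}(\mathcal{D})
\;+\; w_{\mathrm{reg}}\, E_{\mathrm{reg}}(\mathcal{D})
```

Here \(\mathcal{D}\) denotes the volumetric deformation of space: \(E_{\mathrm{dense}}\) measures alignment of the deformed model against the input depth map, \(E_{\mathrm{sparse}}\) penalizes distances to matched sparse color features, and \(E_{\mathrm{reg}}\) enforces local smoothness of the deformation. Minimizing such an energy at the camera's capture rate is what the data-parallel flip-flop strategy targets.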
Cite
Text
Innmann et al. "VolumeDeform: Real-Time Volumetric Non-Rigid Reconstruction." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46484-8_22

Markdown

[Innmann et al. "VolumeDeform: Real-Time Volumetric Non-Rigid Reconstruction." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/innmann2016eccv-volumedeform/) doi:10.1007/978-3-319-46484-8_22

BibTeX
@inproceedings{innmann2016eccv-volumedeform,
title = {{VolumeDeform: Real-Time Volumetric Non-Rigid Reconstruction}},
author = {Innmann, Matthias and Zollhöfer, Michael and Nießner, Matthias and Theobalt, Christian and Stamminger, Marc},
booktitle = {European Conference on Computer Vision},
year = {2016},
  pages = {362--379},
doi = {10.1007/978-3-319-46484-8_22},
url = {https://mlanthology.org/eccv/2016/innmann2016eccv-volumedeform/}
}