Single-View RGBD-Based Reconstruction of Dynamic Human Geometry
Abstract
We present a method for reconstructing the geometry and appearance of indoor scenes containing dynamic human subjects using a single (optionally moving) RGBD sensor. We introduce a framework for building a representation of the articulated scene geometry as a set of piecewise rigid parts which are tracked and accumulated over time using moving voxel grids containing a signed distance representation. Data association of noisy depth measurements with body parts is achieved by online training of a prior shape model for the specific subject. A novel frame-to-frame model registration is introduced which combines iterative closest-point with additional correspondences from optical flow and prior pose constraints from noisy skeletal tracking data. We quantitatively evaluate the reconstruction and tracking performance of the approach using a synthetic animated scene. We demonstrate that the approach is capable of reconstructing mid-resolution surface models of people from low-resolution noisy data acquired from a consumer RGBD camera.
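The signed-distance accumulation described in the abstract can be illustrated with a per-voxel weighted running average, the standard update used in volumetric depth fusion. The sketch below is a generic KinectFusion-style rule under assumed defaults; the function name, truncation band, and weights are illustrative, not taken from the paper:

```python
def fuse_voxel(tsdf, weight, new_sdf, new_weight=1.0, trunc=0.1):
    """Fuse one new signed-distance observation into a voxel.

    Generic truncated-signed-distance (TSDF) weighted running average;
    parameter values here are assumptions, not the paper's settings.
    """
    d = max(-trunc, min(trunc, new_sdf))   # truncate the signed distance
    w = weight + new_weight                # accumulate observation weight
    fused = (tsdf * weight + d * new_weight) / w
    return fused, w

# Two equally weighted observations average together:
v, w = fuse_voxel(0.0, 0.0, 0.05)   # first observation
v, w = fuse_voxel(v, w, 0.01)       # second observation: v is now ~0.03
```

Applying this update per voxel, with one moving grid per rigid body part, accumulates noisy per-frame depth into a smoothed surface over time.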
Cite
Text
Malleson et al. "Single-View RGBD-Based Reconstruction of Dynamic Human Geometry." IEEE/CVF International Conference on Computer Vision Workshops, 2013. doi:10.1109/ICCVW.2013.48
Markdown
[Malleson et al. "Single-View RGBD-Based Reconstruction of Dynamic Human Geometry." IEEE/CVF International Conference on Computer Vision Workshops, 2013.](https://mlanthology.org/iccvw/2013/malleson2013iccvw-singleview/) doi:10.1109/ICCVW.2013.48
BibTeX
@inproceedings{malleson2013iccvw-singleview,
title = {{Single-View RGBD-Based Reconstruction of Dynamic Human Geometry}},
author = {Malleson, Charles and Klaudiny, Martin and Hilton, Adrian and Guillemaut, Jean-Yves},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2013},
pages = {307-314},
doi = {10.1109/ICCVW.2013.48},
url = {https://mlanthology.org/iccvw/2013/malleson2013iccvw-singleview/}
}