Performance Capture of Interacting Characters with Handheld Kinects
Abstract
We present an algorithm for marker-less performance capture of interacting humans using only three hand-held Kinect cameras. Our method reconstructs human skeletal poses, deforming surface geometry, and camera poses for every time step of the depth video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem that optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Only the combination of geometric and photometric correspondences and the integration of human pose and camera pose estimation enables reliable performance capture with only three sensors. As opposed to previous performance capture methods, our algorithm succeeds on general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.
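The abstract describes a joint energy that combines dense geometric alignment with sparse photometric (image-feature) correspondences in one minimization. The toy sketch below illustrates that idea in a deliberately simplified 2D setting: a rigid pose (rotation plus translation) is recovered by gradient descent on a weighted sum of a dense geometric term and a sparse feature term. All names, weights, and the optimization scheme here are illustrative assumptions for exposition; the paper's actual formulation covers articulated skeletons, surface deformation, and per-camera poses.

```python
import numpy as np

def transform(pts, theta, t):
    """Apply a 2D rigid transform: rotate by theta, then translate by t."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + t

def joint_energy(params, template, scan, feat_src, feat_dst,
                 w_geo=1.0, w_photo=0.5):
    """Weighted sum of a dense geometric term and a sparse feature term.

    The weights w_geo / w_photo are arbitrary illustrative choices.
    """
    theta, tx, ty = params
    t = np.array([tx, ty])
    geo = np.mean(np.sum((transform(template, theta, t) - scan) ** 2, axis=1))
    photo = np.mean(np.sum((transform(feat_src, theta, t) - feat_dst) ** 2, axis=1))
    return w_geo * geo + w_photo * photo

def minimize(template, scan, feat_src, feat_dst, lr=0.1, iters=500):
    """Plain gradient descent with central-difference numerical gradients."""
    params = np.zeros(3)  # (theta, tx, ty), initialized at identity pose
    for _ in range(iters):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = 1e-5
            grad[i] = (joint_energy(params + d, template, scan, feat_src, feat_dst)
                       - joint_energy(params - d, template, scan, feat_src, feat_dst)) / 2e-5
        params -= lr * grad
    return params

# Synthetic demo: recover a known pose from noiseless "observations".
template = np.array([[0., 0.], [1., 0.], [0., 1.],
                     [1., 1.], [0.5, 0.5], [0.2, 0.8]])
true_theta, true_t = 0.3, np.array([0.5, -0.2])
scan = transform(template, true_theta, true_t)       # dense geometric targets
feat_src = template[:3]                              # sparse feature points
feat_dst = transform(feat_src, true_theta, true_t)   # their matched locations

est = minimize(template, scan, feat_src, feat_dst)
```

On this synthetic input the descent drives both terms toward zero and recovers the ground-truth pose; the point is that neither term alone need constrain the pose as well as their weighted combination, which mirrors the paper's claim that only geometric plus photometric correspondences together make the capture reliable.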
Cite
Text
Ye et al. "Performance Capture of Interacting Characters with Handheld Kinects." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33709-3_59
Markdown
[Ye et al. "Performance Capture of Interacting Characters with Handheld Kinects." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/ye2012eccv-performance/) doi:10.1007/978-3-642-33709-3_59
BibTeX
@inproceedings{ye2012eccv-performance,
title = {{Performance Capture of Interacting Characters with Handheld Kinects}},
author = {Ye, Genzhi and Liu, Yebin and Hasler, Nils and Ji, Xiangyang and Dai, Qionghai and Theobalt, Christian},
booktitle = {European Conference on Computer Vision},
year = {2012},
pages = {828--841},
doi = {10.1007/978-3-642-33709-3_59},
url = {https://mlanthology.org/eccv/2012/ye2012eccv-performance/}
}