Marker-Less Deformable Mesh Tracking for Human Shape and Motion Capture

Abstract

We present a novel algorithm to jointly capture the motion and the dynamic shape of humans from multiple video streams without using optical markers. Instead of relying on kinematic skeletons, as traditional motion capture methods do, our approach uses a deformable high-quality mesh of a human as scene representation. It jointly uses an image-based 3D correspondence estimation algorithm and a fast Laplacian mesh deformation scheme to capture both motion and surface deformation of the actor from the input video footage. As opposed to many related methods, our algorithm can track people wearing wide apparel, it can straightforwardly be applied to any type of subject, e.g. animals, and it preserves the connectivity of the mesh over time. We demonstrate the performance of our approach using synthetic and captured real-world video sequences and validate its accuracy by comparison to the ground truth.
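The Laplacian mesh deformation scheme mentioned in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation; it shows the general technique under simple assumptions (uniform Laplacian weights, soft positional constraints solved in least squares), with a hypothetical `laplacian_deform` helper operating on a toy edge mesh.

```python
import numpy as np

def laplacian_deform(verts, edges, handles, w=10.0):
    """Toy uniform-weight Laplacian deformation (illustrative sketch only).

    verts:   (n, 3) array of rest-pose vertex positions
    edges:   iterable of (i, j) vertex index pairs
    handles: dict {vertex index: target 3D position} (soft constraints)
    w:       weight of the positional constraints

    Minimizes ||L x' - L x||^2 + w^2 * sum ||x'_h - target_h||^2,
    i.e. deformed vertices keep the rest pose's differential coordinates
    while constrained vertices are pulled toward their targets.
    """
    verts = np.asarray(verts, dtype=float)
    n = len(verts)

    # Assemble the (dense, for clarity) uniform graph Laplacian.
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0

    delta = L @ verts  # differential coordinates of the rest pose

    # Stack weighted positional-constraint rows below the Laplacian system.
    rows, rhs = [L], [delta]
    for idx, target in handles.items():
        row = np.zeros((1, n))
        row[0, idx] = w
        rows.append(row)
        rhs.append(w * np.asarray(target, dtype=float)[None, :])

    A = np.vstack(rows)
    b = np.vstack(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solve
    return x
```

For example, pinning one end of a 5-vertex chain and lifting the other makes the interior vertices bend smoothly between the two handles, since the solve balances the positional constraints against preserving each vertex's differential coordinates.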

Cite

Text

de Aguiar et al. "Marker-Less Deformable Mesh Tracking for Human Shape and Motion Capture." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007. doi:10.1109/CVPR.2007.383296

Markdown

[de Aguiar et al. "Marker-Less Deformable Mesh Tracking for Human Shape and Motion Capture." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007.](https://mlanthology.org/cvpr/2007/deaguiar2007cvpr-marker/) doi:10.1109/CVPR.2007.383296

BibTeX

@inproceedings{deaguiar2007cvpr-marker,
  title     = {{Marker-Less Deformable Mesh Tracking for Human Shape and Motion Capture}},
  author    = {de Aguiar, Edilson and Theobalt, Christian and Stoll, Carsten and Seidel, Hans-Peter},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2007},
  doi       = {10.1109/CVPR.2007.383296},
  url       = {https://mlanthology.org/cvpr/2007/deaguiar2007cvpr-marker/}
}