Learning to Track 3D Human Motion from Silhouettes
Abstract
We describe a sparse Bayesian regression method for recovering 3D human body motion directly from silhouettes extracted from monocular video sequences. No detailed body shape model is needed, and realism is ensured by training on real human motion capture data. The tracker estimates 3D body pose by using Relevance Vector Machine regression to combine a learned autoregressive dynamical model with robust shape descriptors extracted automatically from image silhouettes. We studied several different combination methods, the most effective being to learn a nonlinear observation-update correction based on joint regression with respect to the predicted state and the observations. We demonstrate the method on a 54-parameter full body pose model, both quantitatively using motion capture based test sequences, and qualitatively on a test video sequence.
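The observation-update idea in the abstract — regress the current pose jointly on the dynamics-based prediction and the current image observation — can be sketched on toy data. This is an illustrative stand-in, not the paper's implementation: the state and observation dimensions, the AR(2) dynamics, and the observation map are all made up, and plain ridge regression replaces the Relevance Vector Machine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: a low-dimensional "pose" state x_t
# driven by second-order autoregressive dynamics, observed through a noisy
# linear "silhouette descriptor" z_t. All dimensions here are illustrative
# (the paper's pose model has 54 parameters).
d = 3
A1, A2 = 1.5 * np.eye(d), -0.6 * np.eye(d)   # assumed stable AR(2) dynamics
C = rng.standard_normal((d, d))               # assumed observation map

def simulate(T):
    X = np.zeros((T, d))
    X[0], X[1] = rng.standard_normal(d), rng.standard_normal(d)
    for t in range(2, T):
        X[t] = X[t-1] @ A1.T + X[t-2] @ A2.T + 0.05 * rng.standard_normal(d)
    Z = X @ C.T + 0.1 * rng.standard_normal((T, d))  # noisy observations
    return X, Z

Xtr, Ztr = simulate(500)

# Joint regression: predict x_t from (x_{t-1}, x_{t-2}, z_t), i.e. from the
# information driving the dynamical prediction together with the current
# observation. Ridge regression stands in for the RVM here.
F = np.hstack([Xtr[1:-1], Xtr[:-2], Ztr[2:]])   # features for t = 2..T-1
Y = Xtr[2:]                                      # targets x_t
lam = 1e-3
W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

# Track a held-out sequence, feeding back our own estimates as a tracker would.
Xte, Zte = simulate(200)
est = np.zeros_like(Xte)
est[0], est[1] = Xte[0], Xte[1]                  # initialise from ground truth
for t in range(2, len(Xte)):
    f = np.concatenate([est[t-1], est[t-2], Zte[t]])
    est[t] = f @ W

err = np.mean(np.linalg.norm(est[2:] - Xte[2:], axis=1))
print(f"mean tracking error: {err:.3f}")
```

The point of the joint regression (versus predicting from dynamics alone and correcting afterwards) is that the regressor learns how much to trust the prediction versus the observation in one step; an RVM would additionally prune most basis functions, giving the sparse model the paper relies on for speed.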
Cite
Text
Agarwal and Triggs. "Learning to Track 3D Human Motion from Silhouettes." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015343
Markdown
[Agarwal and Triggs. "Learning to Track 3D Human Motion from Silhouettes." International Conference on Machine Learning, 2004.](https://mlanthology.org/icml/2004/agarwal2004icml-learning/) doi:10.1145/1015330.1015343
BibTeX
@inproceedings{agarwal2004icml-learning,
title = {{Learning to Track 3D Human Motion from Silhouettes}},
author = {Agarwal, Ankur and Triggs, Bill},
booktitle = {International Conference on Machine Learning},
year = {2004},
doi = {10.1145/1015330.1015343},
url = {https://mlanthology.org/icml/2004/agarwal2004icml-learning/}
}