Human Attributes from 3D Pose Tracking
Abstract
We show that, from the output of a simple 3D human pose tracker, one can infer physical attributes (e.g., gender and weight) and aspects of mental state (e.g., happiness or sadness). This task is useful for man-machine communication, and it provides a natural benchmark for evaluating the performance of 3D pose tracking methods (vs. conventional Euclidean joint-error metrics). Based on an extensive corpus of motion capture data, with physical and perceptual ground truth, we analyze the inference of subtle biologically-inspired attributes from cyclic gait data. It is shown that inference is also possible with partial observations of the body, and with motions as short as a single gait cycle. Learning models from small amounts of noisy video pose data is, however, prone to over-fitting. To mitigate this we formulate learning in terms of domain adaptation, in which mocap data is used to regularize models for inference from video-based data.
Cite
Text
Sigal et al. "Human Attributes from 3D Pose Tracking." European Conference on Computer Vision, 2010. doi:10.1007/978-3-642-15558-1_18
Markdown
[Sigal et al. "Human Attributes from 3D Pose Tracking." European Conference on Computer Vision, 2010.](https://mlanthology.org/eccv/2010/sigal2010eccv-human/) doi:10.1007/978-3-642-15558-1_18
BibTeX
@inproceedings{sigal2010eccv-human,
title = {{Human Attributes from 3D Pose Tracking}},
author = {Sigal, Leonid and Fleet, David J. and Troje, Nikolaus F. and Livne, Micha},
booktitle = {European Conference on Computer Vision},
year = {2010},
  pages = {243--257},
doi = {10.1007/978-3-642-15558-1_18},
url = {https://mlanthology.org/eccv/2010/sigal2010eccv-human/}
}