Bayesian Non-Parametric Inference for Manifold Based MoCap Representation
Abstract
We propose a novel approach to human action recognition from motion capture (MoCap) data, based on grouping sub-body parts. Action configurations are represented as manifolds, and joint positions are mapped onto a subspace via principal geodesic analysis. The reduced space remains highly informative and allows for classification based on a non-parametric Bayesian approach, generating behaviors for each sub-body part. Once the set of joints is partitioned, the poses of a sub-body part are exchangeable, given a specified prior, and can elicit, in principle, infinitely many behaviors. The generation of these behaviors is specified by a Dirichlet process mixture. We show with several experiments that the recognition gives very promising results, outperforming methods that require temporal alignment.
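The core modeling idea, clustering reduced pose features of a sub-body part with a Dirichlet process mixture so the number of behaviors need not be fixed in advance, can be sketched with a truncated variational DP mixture. This is a minimal illustration, not the paper's implementation: the synthetic 3-D features stand in for PGA-reduced pose representations, and scikit-learn's `BayesianGaussianMixture` with a stick-breaking (Dirichlet process) prior plays the role of the non-parametric mixture.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-in for PGA-reduced pose features of one sub-body part:
# two well-separated clusters in a 3-D reduced space (hypothetical data).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=-2.0, scale=0.3, size=(100, 3)),
    rng.normal(loc=+2.0, scale=0.3, size=(100, 3)),
])

# Truncated Dirichlet process mixture: the stick-breaking prior lets the
# model activate only as many of the 10 candidate components as the data
# support, mimicking the "in principle infinite behaviors" property.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,
    random_state=0,
)
labels = dpm.fit_predict(features)

# Components with non-negligible posterior weight correspond to the
# inferred behaviors for this sub-body part.
active = int(np.sum(dpm.weights_ > 0.05))
print("active behaviors:", active)
```

In a full pipeline, one such mixture would be fit per sub-body part, and the posterior component assignments would serve as the behavior labels feeding the action classifier.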
Cite
Text
Natola et al. "Bayesian Non-Parametric Inference for Manifold Based MoCap Representation." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.523
Markdown
[Natola et al. "Bayesian Non-Parametric Inference for Manifold Based MoCap Representation." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/natola2015iccv-bayesian/) doi:10.1109/ICCV.2015.523
BibTeX
@inproceedings{natola2015iccv-bayesian,
title = {{Bayesian Non-Parametric Inference for Manifold Based MoCap Representation}},
author = {Natola, Fabrizio and Ntouskos, Valsamis and Sanzari, Marta and Pirri, Fiora},
booktitle = {International Conference on Computer Vision},
year = {2015},
doi = {10.1109/ICCV.2015.523},
url = {https://mlanthology.org/iccv/2015/natola2015iccv-bayesian/}
}