Learning Human Motion Models from Unsegmented Videos
Abstract
We present a novel method for learning human motion models from unsegmented videos. We propose a unified framework that encodes spatio-temporal relationships between descriptive motion parts and the appearance of individual poses. Using sparse sets of spatial and spatio-temporal features, the method automatically learns static pose models and spatio-temporal motion parts; neither motion cycles nor human figures need to be segmented for learning. We evaluate the model on a publicly available action dataset and demonstrate that our method performs well on a number of classification tasks. We also show that classification rates improve as the number of pose models in the framework increases.
Cite
Text
Filipovych and Ribeiro. "Learning Human Motion Models from Unsegmented Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008. doi:10.1109/CVPR.2008.4587724
Markdown
[Filipovych and Ribeiro. "Learning Human Motion Models from Unsegmented Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008.](https://mlanthology.org/cvpr/2008/filipovych2008cvpr-learning/) doi:10.1109/CVPR.2008.4587724
BibTeX
@inproceedings{filipovych2008cvpr-learning,
title = {{Learning Human Motion Models from Unsegmented Videos}},
author = {Filipovych, Roman and Ribeiro, Eraldo},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2008},
doi = {10.1109/CVPR.2008.4587724},
url = {https://mlanthology.org/cvpr/2008/filipovych2008cvpr-learning/}
}