Articulated Motion Modeling for Activity Analysis
Abstract
We propose an algorithm for articulated human motion segmentation that estimates parametric motions of body parts and segments images into moving regions accordingly. Our approach combines robust optical flow estimation, RANSAC, and region segmentation using color and Gaussian shape priors. This combination results in an algorithm that can robustly estimate and segment multiple motions, even for moving regions with small support and in low-resolution images. Based on the raw motion segmentation, consistent body motions are detected over time to characterize human activity. The effectiveness of this approach is demonstrated in a real scenario: characterizing dining activities of patients at a nursing home.
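The core estimation step described in the abstract (RANSAC over a parametric motion model fit to optical flow) can be illustrated with a minimal sketch: fit a 2D affine motion model to sparse point/flow pairs, use RANSAC to separate one dominant motion's inliers from outliers, then refit on the inliers. This is an illustrative assumption of how such a step might look, not the paper's implementation; all function names, thresholds, and iteration counts are hypothetical.

```python
import random
import numpy as np

def fit_affine(pts, flows):
    """Least-squares fit of the affine motion model (u, v) = A p + t.

    Each flow vector (u, v) at point (x, y) contributes two linear
    equations in the six parameters (a11, a12, tx, a21, a22, ty).
    """
    n = len(pts)
    X = np.zeros((2 * n, 6))
    y = np.zeros(2 * n)
    for i, ((px, py), (u, v)) in enumerate(zip(pts, flows)):
        X[2 * i]     = [px, py, 1.0, 0.0, 0.0, 0.0]
        X[2 * i + 1] = [0.0, 0.0, 0.0, px, py, 1.0]
        y[2 * i], y[2 * i + 1] = u, v
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    return params

def ransac_affine(pts, flows, iters=200, thresh=0.5, seed=0):
    """RANSAC: repeatedly fit an affine model to 3 random samples,
    keep the model with the most inliers, then refit on all inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(range(len(pts)), 3)  # 3 points determine an affine map
        a11, a12, tx, a21, a22, ty = fit_affine(
            [pts[i] for i in sample], [flows[i] for i in sample])
        inliers = []
        for i, ((px, py), (u, v)) in enumerate(zip(pts, flows)):
            # Residual between predicted and observed flow at this point.
            du = a11 * px + a12 * py + tx - u
            dv = a21 * px + a22 * py + ty - v
            if du * du + dv * dv < thresh * thresh:
                inliers.append(i)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Final refit on the consensus set of the best model.
    params = fit_affine([pts[i] for i in best_inliers],
                        [flows[i] for i in best_inliers])
    return params, best_inliers
```

In a multi-motion setting this would be applied greedily: estimate the dominant motion, remove its inliers, and repeat on the remaining flow vectors, with the color and Gaussian shape priors then refining each region's support.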
Cite
Text
Gao et al. "Articulated Motion Modeling for Activity Analysis." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2004. doi:10.1109/CVPR.2004.303
Markdown
[Gao et al. "Articulated Motion Modeling for Activity Analysis." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2004.](https://mlanthology.org/cvpr/2004/gao2004cvpr-articulated/) doi:10.1109/CVPR.2004.303
BibTeX
@inproceedings{gao2004cvpr-articulated,
title = {{Articulated Motion Modeling for Activity Analysis}},
author = {Gao, Jiang and Collins, Robert T. and Hauptmann, Alexander G. and Wactlar, Howard D.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2004},
pages = {20},
doi = {10.1109/CVPR.2004.303},
url = {https://mlanthology.org/cvpr/2004/gao2004cvpr-articulated/}
}