Articulated Pose Estimation with Flexible Mixtures-of-Parts
Abstract
We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.
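The abstract's claim that tree-structured relations can be "efficiently optimized with dynamic programming" can be illustrated with a toy max-sum pass. This is a hedged sketch, not the authors' code: mixture "types" stand in for the per-part template choices, the spatial (spring) terms are omitted for brevity, and all part names, tree edges, and scores below are made up for illustration.

```python
# Hedged sketch (not the authors' code): max-sum dynamic programming over a
# tree of parts, showing why tree-structured co-occurrence relations admit
# efficient exact inference. Spatial spring terms are omitted; all names and
# numbers are illustrative toy data.

def subtree_scores(part, tree, unary, pairwise):
    """Return s where s[t] is the best total score of the subtree rooted at
    `part` when `part` takes mixture type t (one bottom-up DP pass)."""
    scores = list(unary[part])
    for child in tree.get(part, []):
        child_scores = subtree_scores(child, tree, unary, pairwise)
        for t in range(len(scores)):
            # Pick the best child type given this part's type t
            # (co-occurrence score plus the child's subtree score).
            scores[t] += max(
                pairwise[(part, child)][t][tc] + child_scores[tc]
                for tc in range(len(child_scores))
            )
    return scores

# Toy model: a torso root with two children, two mixture types per part.
tree = {"torso": ["head", "arm"]}
unary = {"torso": [1.0, 0.5], "head": [0.6, 0.2], "arm": [0.3, 0.9]}
pairwise = {
    ("torso", "head"): [[0.4, 0.0], [0.0, 0.4]],
    ("torso", "arm"):  [[0.1, 0.2], [0.3, 0.0]],
}

best = max(subtree_scores("torso", tree, unary, pairwise))  # best score, approx. 3.1
```

Because each edge is visited once and each visit scans all type pairs, the cost is linear in the number of parts and quadratic in the number of mixture types, which is what makes tree-structured models fast in practice.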
Cite
Text
Yang and Ramanan. "Articulated Pose Estimation with Flexible Mixtures-of-Parts." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995741
Markdown
[Yang and Ramanan. "Articulated Pose Estimation with Flexible Mixtures-of-Parts." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/yang2011cvpr-articulated/) doi:10.1109/CVPR.2011.5995741
BibTeX
@inproceedings{yang2011cvpr-articulated,
title = {{Articulated Pose Estimation with Flexible Mixtures-of-Parts}},
author = {Yang, Yi and Ramanan, Deva},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {1385-1392},
doi = {10.1109/CVPR.2011.5995741},
url = {https://mlanthology.org/cvpr/2011/yang2011cvpr-articulated/}
}