Articulated Pose Estimation Using Discriminative Armlet Classifiers

Abstract

We propose a novel approach for human pose estimation in real-world cluttered scenes, and focus on the challenging problem of predicting the pose of both arms for each person in the image. For this purpose, we build on the notion of poselets [4] and train highly discriminative classifiers to differentiate among arm configurations, which we call armlets. We propose a rich representation which, in addition to standard HOG features, integrates the information of strong contours, skin color and contextual cues in a principled manner. Unlike existing methods, we evaluate our approach on a large subset of images from the PASCAL VOC detection dataset, where critical visual phenomena, such as occlusion, truncation, multiple instances and clutter, are the norm. On this new pose estimation dataset, our approach outperforms the state-of-the-art technique of Yang and Ramanan [26], improving PCP accuracy on the arm keypoint prediction task from 29.0% to 37.5%.
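The feature pipeline the abstract outlines — HOG-style gradient histograms combined with skin-color cues, fed to a discriminative classifier — can be illustrated with a minimal sketch. The `hog_like` and `skin_fraction` helpers below are simplified stand-ins for illustration (a coarse orientation histogram and a crude RGB skin test), not the paper's actual armlet features or training procedure.

```python
import numpy as np

def hog_like(patch, bins=9):
    """Coarse gradient-orientation histogram (a simplified stand-in for HOG)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)  # L1-normalize

def skin_fraction(rgb):
    """Fraction of pixels passing a crude RGB skin test (illustrative only)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return np.array([mask.mean()])

def armlet_features(rgb):
    """Concatenate gradient and skin-color cues into one descriptor,
    in the spirit of the paper's enriched HOG representation."""
    gray = rgb.mean(axis=2)
    return np.concatenate([hog_like(gray), skin_fraction(rgb)])

# Example: featurize a random 32x32 RGB patch
patch = np.random.RandomState(0).randint(0, 256, size=(32, 32, 3))
feat = armlet_features(patch)  # 9 orientation bins + 1 skin cue = 10 dims
```

In the paper these descriptors would additionally include contour and contextual cues and be scored by linear armlet classifiers trained to separate arm configurations; the sketch only shows the feature-concatenation idea.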

Cite

Text

Gkioxari et al. "Articulated Pose Estimation Using Discriminative Armlet Classifiers." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.429

Markdown

[Gkioxari et al. "Articulated Pose Estimation Using Discriminative Armlet Classifiers." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/gkioxari2013cvpr-articulated/) doi:10.1109/CVPR.2013.429

BibTeX

@inproceedings{gkioxari2013cvpr-articulated,
  title     = {{Articulated Pose Estimation Using Discriminative Armlet Classifiers}},
  author    = {Gkioxari, Georgia and Arbel{\'a}ez, Pablo and Bourdev, Lubomir and Malik, Jitendra},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2013},
  doi       = {10.1109/CVPR.2013.429},
  url       = {https://mlanthology.org/cvpr/2013/gkioxari2013cvpr-articulated/}
}