Learning Hierarchical Poselets for Human Parsing
Abstract
We consider the problem of human parsing with part-based models. Most previous work on part-based models considers only rigid parts (e.g. torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets, a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of the human body (e.g. torso + left arm). In the extreme case, they can be whole bodies. We develop a structured model to organize poselets in a hierarchical way and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations.
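The structured model the abstract describes scores a hierarchy of poselets, from whole-body configurations down to smaller parts, and inference picks the jointly best placement. A minimal sketch of that idea (not the authors' code) is a max-sum dynamic program over a part tree; the part names, the three candidate locations, and all scores below are invented for illustration:

```python
# Illustrative sketch of tree-structured part scoring, in the spirit of
# hierarchical part-based models. All parts, locations, and scores are
# hypothetical; a real system would use image-derived filter responses.

# Hierarchy: whole body -> {torso+left arm, legs}; each part may sit at
# one of three candidate image locations (0, 1, 2).
children = {"body": ["torso+left_arm", "legs"],
            "torso+left_arm": [], "legs": []}
locations = [0, 1, 2]

# Hypothetical local appearance scores phi[part][loc].
phi = {"body": [0.2, 1.0, 0.1],
       "torso+left_arm": [0.5, 0.4, 0.9],
       "legs": [0.3, 0.8, 0.2]}

def psi(parent_loc, child_loc):
    """Pairwise compatibility: prefer a child placed near its parent."""
    return -abs(parent_loc - child_loc)

def best_score(part, loc):
    """Max-sum DP: best total score of the subtree rooted at `part`,
    given that `part` is placed at `loc`."""
    total = phi[part][loc]
    for child in children[part]:
        total += max(best_score(child, l) + psi(loc, l) for l in locations)
    return total

root_loc = max(locations, key=lambda l: best_score("body", l))
print(root_loc, round(best_score("body", root_loc), 2))  # prints: 1 2.2
```

In the paper's max-margin setting, the appearance and compatibility terms would be parameterized and their weights learned so that ground-truth configurations outscore all others by a margin; here they are fixed constants to keep the inference step visible.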
Cite
Text
Wang et al. "Learning Hierarchical Poselets for Human Parsing." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995519
Markdown
[Wang et al. "Learning Hierarchical Poselets for Human Parsing." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/wang2011cvpr-learning/) doi:10.1109/CVPR.2011.5995519
BibTeX
@inproceedings{wang2011cvpr-learning,
title = {{Learning Hierarchical Poselets for Human Parsing}},
author = {Wang, Yang and Tran, Duan and Liao, Zicheng},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {1705--1712},
doi = {10.1109/CVPR.2011.5995519},
url = {https://mlanthology.org/cvpr/2011/wang2011cvpr-learning/}
}