Keypoint Communities
Abstract
We present a fast bottom-up method that jointly detects over 100 keypoints on humans or objects, a task also referred to as human/object pose estimation. We model all keypoints belonging to a human or an object (the pose) as a graph and leverage insights from community detection to quantify the independence of keypoints. We use a graph centrality measure to assign training weights to different parts of a pose; the proposed measure quantifies how tightly a keypoint is connected to its neighborhood. Our experiments show that our method outperforms all previous methods for human pose estimation with fine-grained keypoint annotations on the face, the hands, and the feet, for a total of 133 keypoints. We also show that our method generalizes to car poses.
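The core idea of weighting keypoints by how tightly they are connected to their neighborhood can be illustrated with a small sketch. This is not the authors' implementation: the toy skeleton below is an assumption (the paper uses a 133-keypoint graph), and closeness centrality stands in for the paper's community-based measure. Peripheral keypoints (e.g. wrists) receive low centrality and therefore a larger training weight than central ones (e.g. the neck).

```python
from collections import deque

# Toy pose skeleton (assumption: not the actual graph from the paper).
edges = [
    ("nose", "neck"), ("neck", "l_shoulder"), ("neck", "r_shoulder"),
    ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
    ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist"),
]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def closeness(node):
    """Closeness centrality: (n-1) / sum of BFS distances to all other nodes."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return (len(dist) - 1) / sum(dist.values())

centrality = {k: closeness(k) for k in adj}

# Invert centrality so loosely connected keypoints get larger training
# weights, then normalize the weights to sum to 1.
raw = {k: 1.0 / c for k, c in centrality.items()}
total = sum(raw.values())
weights = {k: v / total for k, v in raw.items()}
```

On this toy graph the neck is the most central keypoint and the wrists the least, so the wrists end up with the largest normalized weights, mirroring the intuition that fine-grained peripheral keypoints need more emphasis during training.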
Cite
Text
Zauss et al. "Keypoint Communities." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01087
Markdown
[Zauss et al. "Keypoint Communities." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/zauss2021iccv-keypoint/) doi:10.1109/ICCV48922.2021.01087
BibTeX
@inproceedings{zauss2021iccv-keypoint,
  title = {{Keypoint Communities}},
  author = {Zauss, Duncan and Kreiss, Sven and Alahi, Alexandre},
  booktitle = {International Conference on Computer Vision},
  year = {2021},
  pages = {11057--11066},
  doi = {10.1109/ICCV48922.2021.01087},
  url = {https://mlanthology.org/iccv/2021/zauss2021iccv-keypoint/}
}