Pose-Guided Knowledge Transfer for Object Part Segmentation

Abstract

Object part segmentation is an important problem for many applications, but generating the annotations to train a part segmentation model is typically quite labor-intensive. Recently, Fang et al. [6] augmented object part segmentation datasets by using keypoint locations as weak supervision to transfer a source object instance's part annotations to an unlabeled target object. We show that while their approach works well when the source and target objects have clearly visible keypoints, it often fails for severely articulated poses. Also, their model does not generalize well across multiple object classes, even when the classes are very similar. In this paper, we propose and evaluate a new model for transferring part segmentations using keypoints, even for complex object poses and across different object classes.
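
To make the idea of keypoint-guided annotation transfer concrete, here is a minimal NumPy sketch. It is not the authors' model: it simply fits a least-squares affine warp between matched source and target keypoints and uses it to resample a source part-label mask onto the target image grid. The function names (`fit_affine`, `transfer_part_mask`) and the choice of an affine warp are illustrative assumptions.

```python
# Illustrative sketch only: affine keypoint-driven part-label transfer,
# a stand-in for the learned pose-guided transfer described in the paper.
import numpy as np

def fit_affine(src_kpts: np.ndarray, tgt_kpts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine map A such that tgt ~= A @ [src; 1]."""
    n = src_kpts.shape[0]
    X = np.hstack([src_kpts, np.ones((n, 1))])         # (n, 3)
    A, *_ = np.linalg.lstsq(X, tgt_kpts, rcond=None)    # (3, 2)
    return A.T                                           # (2, 3)

def transfer_part_mask(src_mask, src_kpts, tgt_kpts, tgt_shape):
    """Warp a source part-label mask onto the target image grid.

    src_mask : (H, W) integer part labels of the annotated source instance
    src_kpts, tgt_kpts : (K, 2) matching keypoints as (x, y) coordinates
    tgt_shape : (H', W') size of the unlabeled target image
    """
    # Inverse warp: map each target pixel back into the source frame,
    # then read off the nearest source part label.
    A = fit_affine(tgt_kpts, src_kpts)                   # target -> source
    H, W = tgt_shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    sx, sy = A @ coords                                   # source-frame coords
    sx = np.clip(np.round(sx).astype(int), 0, src_mask.shape[1] - 1)
    sy = np.clip(np.round(sy).astype(int), 0, src_mask.shape[0] - 1)
    return src_mask[sy, sx].reshape(H, W)
```

A rigid warp like this illustrates why keypoint-only transfer degrades for severely articulated poses: a single global transform cannot align parts that move independently, which is the failure mode the paper targets.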

Cite

Text

Naha et al. "Pose-Guided Knowledge Transfer for Object Part Segmentation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00461

Markdown

[Naha et al. "Pose-Guided Knowledge Transfer for Object Part Segmentation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/naha2020cvprw-poseguided/) doi:10.1109/CVPRW50498.2020.00461

BibTeX

@inproceedings{naha2020cvprw-poseguided,
  title     = {{Pose-Guided Knowledge Transfer for Object Part Segmentation}},
  author    = {Naha, Shujon and Xiao, Qingyang and Banik, Prianka and Reza, Md. Alimoor and Crandall, David J.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {3955--3961},
  doi       = {10.1109/CVPRW50498.2020.00461},
  url       = {https://mlanthology.org/cvprw/2020/naha2020cvprw-poseguided/}
}