Unsupervised Co-Part Segmentation Through Assembly

Abstract

Co-part segmentation is an important problem in computer vision because of its rich applications. We propose an unsupervised learning approach for co-part segmentation from images. During training, we leverage motion information embedded in videos and explicitly extract latent representations to segment meaningful object parts. More importantly, we introduce a dual procedure of part assembly that forms a closed loop with part segmentation, enabling effective self-supervision. We demonstrate the effectiveness of our approach through extensive experiments spanning human bodies, hands, quadrupeds, and robot arms. We show that our approach achieves meaningful and compact part segmentation, outperforming state-of-the-art approaches on diverse benchmarks.
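To make the closed-loop idea concrete, below is a minimal sketch of a reconstruction-style self-supervision loop in which a segmentation module predicts soft part masks and an assembly module recomposes the target frame from those parts plus a reference frame. This is not the authors' architecture or loss; `SegmentNet`, `AssembleNet`, and `training_step` are hypothetical names, and the networks are deliberately tiny placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentNet(nn.Module):
    """Hypothetical part-segmentation network: image -> K soft part masks."""
    def __init__(self, num_parts=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_parts, 1),
        )

    def forward(self, img):
        # Softmax over the part dimension yields soft assignment masks (B, K, H, W).
        return torch.softmax(self.backbone(img), dim=1)

class AssembleNet(nn.Module):
    """Hypothetical assembly network: part masks + reference appearance -> reconstructed frame."""
    def __init__(self, num_parts=8):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(num_parts + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1),
        )

    def forward(self, masks, ref_img):
        return self.decoder(torch.cat([masks, ref_img], dim=1))

def training_step(seg, asm, src_frame, tgt_frame, optimizer):
    """One step of the segmentation/assembly closed loop (reconstruction as self-supervision)."""
    masks = seg(tgt_frame)             # segment parts in the target frame
    recon = asm(masks, src_frame)      # assemble the target from parts + source appearance
    loss = F.l1_loss(recon, tgt_frame) # self-supervised reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random frame pairs standing in for video frames.
seg, asm = SegmentNet(), AssembleNet()
opt = torch.optim.Adam(list(seg.parameters()) + list(asm.parameters()), lr=1e-4)
src, tgt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(training_step(seg, asm, src, tgt, opt))
```

The point of the sketch is the supervision signal: because the assembled output must reproduce the target frame, the masks are pushed toward parts that move coherently across frames, without any segmentation labels.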

Cite

Text

Gao et al. "Unsupervised Co-Part Segmentation Through Assembly." International Conference on Machine Learning, 2021.

Markdown

[Gao et al. "Unsupervised Co-Part Segmentation Through Assembly." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/gao2021icml-unsupervised/)

BibTeX

@inproceedings{gao2021icml-unsupervised,
  title     = {{Unsupervised Co-Part Segmentation Through Assembly}},
  author    = {Gao, Qingzhe and Wang, Bin and Liu, Libin and Chen, Baoquan},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {3576--3586},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/gao2021icml-unsupervised/}
}