Co-Segmentation of Textured 3D Shapes with Sparse Annotations

Abstract

We present a novel co-segmentation method for textured 3D shapes. Our algorithm takes as input a collection of textured shapes belonging to the same category, together with sparse annotations of foreground segments, and produces a joint dense segmentation of all shapes in the collection. We model the segments with a collectively trained Gaussian mixture model. The final segmentation is formulated as an energy minimization across all models jointly, where intra-model edges control the smoothness and separation of segments within a shape, and inter-model edges impart global consistency across the collection. We show promising results on two large real-world datasets, and also compare with previous shape-only 3D segmentation methods on publicly available datasets.
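The pipeline sketched in the abstract can be illustrated with a toy example: fit one Gaussian per segment from the sparsely annotated faces, take the negative log-likelihood of each face's appearance feature as a unary cost, and minimize a smoothness-regularized energy over the mesh adjacency graph. This is a minimal sketch under assumed simplifications, not the paper's implementation: it uses a single diagonal Gaussian per segment instead of a collectively trained mixture, a simple Potts pairwise term for the intra-model edges only, and greedy iterated conditional modes (ICM) rather than the joint multi-shape optimization. All function and variable names here are illustrative.

```python
import numpy as np

def fit_gaussians(features, labels, n_segments):
    """Fit one diagonal Gaussian per segment from sparsely annotated faces.

    `labels` is -1 for unannotated faces and a segment index otherwise.
    """
    params = []
    for k in range(n_segments):
        pts = features[labels == k]
        mu = pts.mean(axis=0)
        var = pts.var(axis=0) + 1e-6  # regularize against degenerate variance
        params.append((mu, var))
    return params

def unary_costs(features, params):
    """Negative log-likelihood of each face under each segment's Gaussian."""
    costs = np.empty((len(features), len(params)))
    for k, (mu, var) in enumerate(params):
        diff = features - mu
        costs[:, k] = 0.5 * ((diff ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
    return costs

def icm_segment(costs, edges, smoothness=1.0, iters=10):
    """Greedy ICM: repeatedly give each face the label minimizing its unary
    cost plus a Potts penalty for each disagreeing mesh neighbor."""
    labels = costs.argmin(axis=1)
    n, k = costs.shape
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        changed = False
        for i in range(n):
            total = costs[i].copy()
            for j in nbrs[i]:
                total += smoothness * (np.arange(k) != labels[j])
            new = int(total.argmin())
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:  # converged
            break
    return labels

# Toy demo: 40 "faces" with 3D color features in two clusters, a chain
# adjacency graph, and only three annotated faces per segment.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (20, 3)),
                   rng.normal(1.0, 0.1, (20, 3))])
ann = np.full(40, -1)
ann[:3] = 0
ann[20:23] = 1
edges = [(i, i + 1) for i in range(39)]
seg = icm_segment(unary_costs(feats, fit_gaussians(feats, ann, 2)),
                  edges, smoothness=0.5)
```

In the full method, additional inter-model edges would connect corresponding faces across different shapes in the collection, coupling their labels so that the segmentation is consistent category-wide.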

Cite

Text

Yumer et al. "Co-Segmentation of Textured 3D Shapes with Sparse Annotations." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.38

Markdown

[Yumer et al. "Co-Segmentation of Textured 3D Shapes with Sparse Annotations." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/yumer2014cvpr-cosegmentation/) doi:10.1109/CVPR.2014.38

BibTeX

@inproceedings{yumer2014cvpr-cosegmentation,
  title     = {{Co-Segmentation of Textured 3D Shapes with Sparse Annotations}},
  author    = {Yumer, Mehmet Ersin and Chun, Won and Makadia, Ameesh},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.38},
  url       = {https://mlanthology.org/cvpr/2014/yumer2014cvpr-cosegmentation/}
}