Learning to Group: A Bottom-up Framework for 3D Part Discovery in Unseen Categories
Abstract
We address the problem of learning to discover 3D parts for objects in unseen categories. Learning a geometric prior over parts and transferring this prior to unseen categories poses fundamental challenges for data-driven shape segmentation approaches. We formulate part discovery as a contextual bandit problem and propose a learning-based iterative grouping framework that learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion. The core of our approach is to restrict the local context used for extracting part-level features, which encourages generalization to novel categories. On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples. Quantitative comparisons against four strong shape segmentation baselines show that we achieve state-of-the-art performance.
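The bottom-up grouping loop described in the abstract can be sketched as a greedy agglomerative procedure: repeatedly score candidate merges between part proposals and accept the highest-scoring one until no merge is confident enough. The sketch below is a minimal illustration of that control flow only; the `merge_score` function here (a hypothetical centroid-distance heuristic) stands in for the learned grouping policy trained in the paper, and the threshold is an assumed parameter.

```python
# Minimal sketch of bottom-up part grouping. Assumption: merge_score is a
# stand-in heuristic (closer centroids -> higher score), NOT the paper's
# learned policy, which scores merges from local part-pair features.
import numpy as np

rng = np.random.default_rng(0)

def merge_score(part_a, part_b):
    # Hypothetical scorer: maps centroid distance into (0, 1].
    dist = np.linalg.norm(part_a.mean(axis=0) - part_b.mean(axis=0))
    return 1.0 / (1.0 + dist)

def group_parts(parts, threshold=0.5):
    """Greedily merge small part proposals into bigger ones."""
    parts = list(parts)
    while len(parts) > 1:
        # Score every candidate pair of current proposals.
        scores = {(i, j): merge_score(parts[i], parts[j])
                  for i in range(len(parts))
                  for j in range(i + 1, len(parts))}
        (i, j), best = max(scores.items(), key=lambda kv: kv[1])
        if best < threshold:
            break  # no remaining pair is confident enough to merge
        merged = np.vstack([parts[i], parts[j]])
        parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    return parts

# Toy input: four tiny 3D point clusters as initial part proposals;
# the two near the origin and the two near (5, 5, 5) should each merge.
proposals = [rng.normal(loc=c, scale=0.01, size=(8, 3))
             for c in [(0, 0, 0), (0, 0, 0.01), (5, 5, 5), (5, 5, 5.01)]]
grouped = group_parts(proposals, threshold=0.5)
print(len(grouped))  # -> 2
```

With the distance-based stand-in, nearby proposals merge into two larger parts and the loop stops once the only remaining candidate pair is far apart, mirroring the progressive bottom-up merging the abstract describes.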
Cite
Text
Luo et al. "Learning to Group: A Bottom-up Framework for 3D Part Discovery in Unseen Categories." International Conference on Learning Representations, 2020.
Markdown
[Luo et al. "Learning to Group: A Bottom-up Framework for 3D Part Discovery in Unseen Categories." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/luo2020iclr-learning-a/)
BibTeX
@inproceedings{luo2020iclr-learning-a,
  title = {{Learning to Group: A Bottom-up Framework for 3D Part Discovery in Unseen Categories}},
  author = {Luo, Tiange and Mo, Kaichun and Huang, Zhiao and Xu, Jiarui and Hu, Siyu and Wang, Liwei and Su, Hao},
  booktitle = {International Conference on Learning Representations},
  year = {2020},
  url = {https://mlanthology.org/iclr/2020/luo2020iclr-learning-a/}
}