PartImageNet: A Large, High-Quality Dataset of Parts

Abstract

It is natural to represent objects in terms of their parts. This has the potential to improve the performance of algorithms for object recognition and segmentation, and can also help with downstream tasks such as activity recognition. Research on part-based models, however, is hindered by the lack of datasets with per-pixel part annotations. This is partly due to the difficulty and high cost of annotating object parts, which is why it has rarely been done except for humans (where there is a large literature on part-based models). To help address this problem, we propose PartImageNet, a large, high-quality dataset with part segmentation annotations. It consists of 158 classes from ImageNet with approximately 24,000 images. PartImageNet is unique because it offers part-level annotations on a general set of classes, including non-rigid, articulated objects, while being an order of magnitude larger than existing part datasets (excluding datasets of humans). It can be utilized for many vision tasks including Object Segmentation, Semantic Part Segmentation, Few-shot Learning and Part Discovery. We conduct comprehensive experiments on these tasks and establish a set of baselines.
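As a rough illustration of how part-segmentation annotations like these might be consumed, below is a minimal sketch that assumes the annotations are distributed as COCO-format JSON (the file path `annotations/train.json` is hypothetical, not taken from the paper) and uses `pycocotools` to turn each part annotation into a binary mask.

```python
# Minimal sketch: loading COCO-style part annotations with pycocotools.
# Assumption: annotations are provided as COCO-format JSON; the path below is hypothetical.
from pycocotools.coco import COCO

coco = COCO("annotations/train.json")  # hypothetical annotation file

# Part categories are stored as ordinary COCO categories in this assumed layout.
cat_ids = coco.getCatIds()
cats = {c["id"]: c["name"] for c in coco.loadCats(cat_ids)}
print(f"{len(cats)} part categories")

# Iterate over a few images and decode each part annotation into an HxW binary mask.
for img_id in coco.getImgIds()[:5]:
    img_info = coco.loadImgs(img_id)[0]
    ann_ids = coco.getAnnIds(imgIds=img_id)
    for ann in coco.loadAnns(ann_ids):
        mask = coco.annToMask(ann)  # uint8 mask, 1 inside the part region
        print(img_info["file_name"], cats[ann["category_id"]], int(mask.sum()))
```

The same loop structure would serve as a starting point for building training targets for semantic part segmentation or for grouping part masks by their parent object class.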

Cite

Text

He et al. "PartImageNet: A Large, High-Quality Dataset of Parts." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20074-8_8

Markdown

[He et al. "PartImageNet: A Large, High-Quality Dataset of Parts." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/he2022eccv-partimagenet/) doi:10.1007/978-3-031-20074-8_8

BibTeX

@inproceedings{he2022eccv-partimagenet,
  title     = {{PartImageNet: A Large, High-Quality Dataset of Parts}},
  author    = {He, Ju and Yang, Shuo and Yang, Shaokang and Kortylewski, Adam and Yuan, Xiaoding and Chen, Jie-Neng and Liu, Shuai and Yang, Cheng and Yu, Qihang and Yuille, Alan},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20074-8_8},
  url       = {https://mlanthology.org/eccv/2022/he2022eccv-partimagenet/}
}