Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation

Abstract

Category-level 6D pose estimation generalizes to unseen objects within a category better than instance-level 6D pose estimation. However, existing category-level methods usually require supervised training with a large number of 6D object pose annotations, which makes them difficult to apply in real scenarios. To address this problem, we propose a self-supervised framework for category-level 6D pose estimation. We leverage DeepSDF as a 3D object representation and design several novel DeepSDF-based loss functions that enable the self-supervised model to predict the poses of unseen objects without any 6D pose labels or explicit 3D models in real scenarios. Experiments demonstrate that our method achieves performance comparable to state-of-the-art fully supervised methods on the category-level NOCS benchmark.
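The core self-supervision signal can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example, not the authors' code; the decoder interface, variable names, and pose convention are assumptions. The idea: surface points back-projected from the observed depth map and mapped into the canonical frame by the predicted pose should lie on the zero-level set of a pretrained DeepSDF decoder, so their absolute SDF values can serve as a pose loss.

import torch

def sdf_pose_loss(sdf_decoder, latent_code, points_cam, R, t, s):
    # Hypothetical sketch, not the paper's implementation.
    # sdf_decoder: pretrained DeepSDF network mapping (latent, xyz) -> sdf
    # latent_code: (D,) shape code for the object instance
    # points_cam:  (N, 3) surface points back-projected from the depth map
    # R, t, s:     predicted rotation (3, 3), translation (3,), scale ()
    #
    # Map camera-frame points into the normalized canonical frame,
    # assuming the convention p_cam = s * R @ p_canon + t.
    points_canon = (points_cam - t) @ R / s
    # Query the implicit shape; true surface points should give sdf == 0.
    latent = latent_code.expand(points_canon.shape[0], -1)
    sdf = sdf_decoder(torch.cat([latent, points_canon], dim=-1))
    # Penalize deviation from the zero-level set.
    return sdf.abs().mean()

Minimizing such a loss with respect to the predicted pose (and optionally the latent shape code) requires neither 6D pose labels nor an explicit 3D model, only depth observations and the learned implicit shape prior.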

Cite

Text

Peng et al. "Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I2.20104

Markdown

[Peng et al. "Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/peng2022aaai-self/) doi:10.1609/AAAI.V36I2.20104

BibTeX

@inproceedings{peng2022aaai-self,
  title     = {{Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation}},
  author    = {Peng, Wanli and Yan, Jianhang and Wen, Hongtao and Sun, Yi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {2082--2090},
  doi       = {10.1609/AAAI.V36I2.20104},
  url       = {https://mlanthology.org/aaai/2022/peng2022aaai-self/}
}