OVE6D: Object Viewpoint Encoding for Depth-Based 6D Object Pose Estimation

Abstract

This paper proposes a universal framework, called OVE6D, for model-based 6D object pose estimation from a single depth image and a target object mask. Our model is trained purely on synthetic data rendered from ShapeNet and, unlike most existing methods, generalizes well to new real-world objects without any fine-tuning. We achieve this by decomposing the 6D pose into viewpoint, in-plane rotation around the camera optical axis, and translation, and by introducing novel lightweight modules for estimating each component in a cascaded manner. The resulting network contains fewer than 4M parameters while demonstrating excellent performance on the challenging T-LESS and Occluded LINEMOD datasets without any dataset-specific training. We show that OVE6D outperforms some contemporary deep learning-based pose estimation methods specifically trained for individual objects or datasets with real-world training data.
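
For illustration, the pose decomposition described in the abstract can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the function name compose_pose, the argument order, and the assumption that the in-plane rotation about the camera optical axis (the z-axis) is applied after the viewpoint rotation are all hypothetical.

import numpy as np

def compose_pose(R_viewpoint, theta, t):
    # Hypothetical sketch: assemble a full 6D pose from the three
    # components estimated in cascade, namely a viewpoint rotation,
    # an in-plane rotation angle theta (radians) around the camera
    # optical axis (z), and a 3D translation vector t.
    R_inplane = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    R = R_inplane @ R_viewpoint   # full object rotation (assumed composition order)
    T = np.eye(4)                 # homogeneous 4x4 pose matrix
    T[:3, :3] = R
    T[:3, 3] = t
    return T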

Cite

Text

Cai et al. "OVE6D: Object Viewpoint Encoding for Depth-Based 6D Object Pose Estimation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00668

Markdown

[Cai et al. "OVE6D: Object Viewpoint Encoding for Depth-Based 6D Object Pose Estimation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/cai2022cvpr-ove6d/) doi:10.1109/CVPR52688.2022.00668

BibTeX

@inproceedings{cai2022cvpr-ove6d,
  title     = {{OVE6D: Object Viewpoint Encoding for Depth-Based 6D Object Pose Estimation}},
  author    = {Cai, Dingding and Heikkilä, Janne and Rahtu, Esa},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {6803--6813},
  doi       = {10.1109/CVPR52688.2022.00668},
  url       = {https://mlanthology.org/cvpr/2022/cai2022cvpr-ove6d/}
}