NOPE: Novel Object Pose Estimation from a Single Image

Abstract

The practicality of 3D object pose estimation remains limited for many applications due to the need for prior knowledge of a 3D model and a training period for new objects. To address this limitation, we propose an approach that takes a single image of a new object as input and predicts the relative pose of this object in new images, without prior knowledge of the object's 3D model and without requiring training time for new objects or categories. We achieve this by training a model to directly predict discriminative embeddings for viewpoints surrounding the object. This prediction is done using a simple U-Net architecture with attention, conditioned on the desired pose, which yields extremely fast inference. We compare our approach to state-of-the-art methods and show that it outperforms them in both accuracy and robustness.
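To make the idea in the abstract concrete, below is a minimal, hypothetical sketch of a pose-conditioned U-Net-style network: given a reference image and a candidate relative pose, it predicts an embedding of the object as seen from that viewpoint, and the query's pose is taken as the candidate whose predicted embedding is closest to the query embedding. All names, shapes, and the FiLM-style conditioning are assumptions for illustration, not the authors' actual architecture.

```python
# Hypothetical sketch, not the NOPE implementation: a small U-Net-like
# network whose bottleneck features are modulated by a candidate relative
# pose, producing one embedding per viewpoint hypothesis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseConditionedUNet(nn.Module):
    def __init__(self, emb_dim=128, pose_dim=4):
        super().__init__()
        self.enc1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)
        self.enc2 = nn.Conv2d(64, 128, 3, stride=2, padding=1)
        # The candidate pose (here a quaternion) scales and shifts the
        # bottleneck features (FiLM-style conditioning, assumed).
        self.pose_mlp = nn.Linear(pose_dim, 2 * 128)
        self.dec1 = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(64 + 64, emb_dim, 4, stride=2, padding=1)

    def forward(self, ref_img, rel_pose):
        f1 = F.relu(self.enc1(ref_img))           # (B, 64, H/2, W/2)
        f2 = F.relu(self.enc2(f1))                # (B, 128, H/4, W/4)
        scale, shift = self.pose_mlp(rel_pose).chunk(2, dim=1)
        f2 = f2 * scale[:, :, None, None] + shift[:, :, None, None]
        d1 = F.relu(self.dec1(f2))                # (B, 64, H/2, W/2)
        out = self.dec2(torch.cat([d1, f1], 1))   # (B, emb_dim, H, W)
        # Pool to a unit-norm embedding for this viewpoint hypothesis.
        return F.normalize(out.mean(dim=(2, 3)), dim=1)

def estimate_pose(model, ref_img, query_emb, candidate_poses):
    """Pick the candidate relative pose whose predicted embedding best
    matches the query embedding (assumed to come from a matching image
    encoder, omitted here)."""
    with torch.no_grad():
        ref = ref_img.expand(candidate_poses.shape[0], -1, -1, -1)
        emb = model(ref, candidate_poses)         # one embedding per candidate
        scores = emb @ query_emb.squeeze(0)       # cosine similarity scores
    return candidate_poses[scores.argmax()]
```

Since all viewpoint hypotheses are scored by a single batched forward pass and a dot product, inference stays fast, which is consistent with the abstract's claim of very fast inference.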

Cite

Text

Nguyen et al. "NOPE: Novel Object Pose Estimation from a Single Image." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01697

Markdown

[Nguyen et al. "NOPE: Novel Object Pose Estimation from a Single Image." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/nguyen2024cvpr-nope/) doi:10.1109/CVPR52733.2024.01697

BibTeX

@inproceedings{nguyen2024cvpr-nope,
  title     = {{NOPE: Novel Object Pose Estimation from a Single Image}},
  author    = {Nguyen, Van Nguyen and Groueix, Thibault and Ponimatkin, Georgy and Hu, Yinlin and Marlet, Renaud and Salzmann, Mathieu and Lepetit, Vincent},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {17923--17932},
  doi       = {10.1109/CVPR52733.2024.01697},
  url       = {https://mlanthology.org/cvpr/2024/nguyen2024cvpr-nope/}
}