Wide-Depth-Range 6D Object Pose Estimation in Space

Abstract

6D pose estimation in space poses unique challenges that are not commonly encountered in the terrestrial setting. One of the most striking differences is the lack of atmospheric scattering, allowing objects to be visible from a great distance while complicating illumination conditions. Currently available benchmark datasets do not place a sufficient emphasis on this aspect and mostly depict the target in close proximity. Prior work tackling pose estimation under large scale variations relies on a two-stage approach to first estimate scale, followed by pose estimation on a resized image patch. We instead propose a single-stage hierarchical end-to-end trainable network that is more robust to scale variations. We demonstrate that it outperforms existing approaches not only on images synthesized to resemble images taken in space but also on standard benchmarks.

Cite

Text

Hu et al. "Wide-Depth-Range 6D Object Pose Estimation in Space." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01561

Markdown

[Hu et al. "Wide-Depth-Range 6D Object Pose Estimation in Space." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/hu2021cvpr-widedepthrange/) doi:10.1109/CVPR46437.2021.01561

BibTeX

@inproceedings{hu2021cvpr-widedepthrange,
  title     = {{Wide-Depth-Range 6D Object Pose Estimation in Space}},
  author    = {Hu, Yinlin and Speierer, S{\'e}bastien and Jakob, Wenzel and Fua, Pascal and Salzmann, Mathieu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {15870--15879},
  doi       = {10.1109/CVPR46437.2021.01561},
  url       = {https://mlanthology.org/cvpr/2021/hu2021cvpr-widedepthrange/}
}