A Monocular Pose Estimation Case Study: The Hayabusa2 Minerva-II2 Deployment

Abstract

In an environment of increasing orbital debris and remote operation, visual data acquisition methods are becoming a core competency of the next generation of spacecraft. However, deep space missions often generate limited data and noisy images, necessitating complex data analysis methods. Here, a state-of-the-art convolutional neural network (CNN) pose estimation pipeline is applied to the Hayabusa2 Minerva-II2 rover deployment, a challenging case with noisy images and a symmetric target. To enable training of this CNN, a custom dataset is created. The deployment velocity is estimated as 0.1908 m/s using a projective geometry approach and 0.1934 m/s using a CNN landmark detector approach, compared to the official JAXA estimate of 0.1924 m/s (relative to the spacecraft). Additionally, the attitude estimation results from the real deployment images are shared and the associated tumble estimation is discussed.
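The projective geometry approach mentioned above can be illustrated with a minimal sketch: under a pinhole camera model, a target of known physical size yields a depth estimate from its apparent size in pixels, and fitting depth against time gives a line-of-sight velocity. All numeric values below (focal length, rover diameter, pixel measurements) are illustrative placeholders, not values from the paper or the mission.

```python
import numpy as np

def depth_from_size(f_px: float, diameter_m: float, apparent_px: float) -> float:
    """Pinhole-model depth along the optical axis: Z = f * D / d_px."""
    return f_px * diameter_m / apparent_px

def radial_velocity(depths_m, times_s) -> float:
    """Least-squares slope of depth vs. time -> line-of-sight velocity [m/s]."""
    slope, _intercept = np.polyfit(np.asarray(times_s, float),
                                   np.asarray(depths_m, float), 1)
    return slope

# Hypothetical measurements: the rover's apparent diameter shrinks as it recedes.
f_px = 2500.0                          # assumed focal length [px]
D = 0.15                               # assumed rover diameter [m]
times = [0.0, 5.0, 10.0, 15.0]         # frame timestamps [s]
apparent = [125.0, 93.0, 74.0, 61.5]   # measured apparent diameters [px]

depths = [depth_from_size(f_px, D, a) for a in apparent]
v = radial_velocity(depths, times)
print(f"estimated line-of-sight velocity: {v:.3f} m/s")
```

A real pipeline would measure the apparent size (or CNN-detected landmark positions) per frame and account for lens distortion and the spacecraft's own motion; the least-squares fit here simply smooths frame-to-frame measurement noise.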

Cite

Text

Price and Yoshida. "A Monocular Pose Estimation Case Study: The Hayabusa2 Minerva-II2 Deployment." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00227

Markdown

[Price and Yoshida. "A Monocular Pose Estimation Case Study: The Hayabusa2 Minerva-II2 Deployment." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/price2021cvprw-monocular/) doi:10.1109/CVPRW53098.2021.00227

BibTeX

@inproceedings{price2021cvprw-monocular,
  title     = {{A Monocular Pose Estimation Case Study: The Hayabusa2 Minerva-II2 Deployment}},
  author    = {Price, Andrew and Yoshida, Kazuya},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {1992--2001},
  doi       = {10.1109/CVPRW53098.2021.00227},
  url       = {https://mlanthology.org/cvprw/2021/price2021cvprw-monocular/}
}