End-to-End Neural Estimation of Spacecraft Pose with Intermediate Detection of Keypoints
Abstract
State-of-the-art methods for estimating the pose of a spacecraft in Earth-orbit images rely on a convolutional neural network either to directly regress the spacecraft’s 6D pose parameters, or to localize pre-defined keypoints that are then used to compute pose through a Perspective-n-Point solver. We study an alternative solution that uses a convolutional network to predict keypoint locations, which are in turn used by a second network to infer the spacecraft’s 6D pose. This formulation retains the performance advantages of keypoint-based methods, while affording end-to-end training and faster processing. Our paper is the first to evaluate the applicability of such a method to the space domain. On the SPEED dataset, our approach achieves a mean rotation error of $4.69^\circ$ and a mean translation error of $1.59\%$ with a throughput of 31 fps. We show that computational complexity can be reduced at the cost of a minor loss in accuracy.
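To make the keypoint-based baseline that the abstract contrasts against concrete, the sketch below recovers a 6D pose from 2D keypoint detections and the spacecraft's known 3D keypoint coordinates. It uses a simple Direct Linear Transform in place of a full Perspective-n-Point solver and assumes noise-free correspondences and known camera intrinsics; the function name, point layout, and solver choice are illustrative, not the paper's implementation (practical pipelines typically use a robust PnP solver such as EPnP with RANSAC).

```python
import numpy as np

def pose_from_keypoints(pts3d, pts2d, K):
    """Recover rotation R and translation t from n >= 6 exact 2D-3D
    keypoint correspondences via the Direct Linear Transform (DLT),
    a minimal stand-in for a Perspective-n-Point solver.

    pts3d: (n, 3) keypoint coordinates in the spacecraft body frame.
    pts2d: (n, 2) detected keypoint pixel locations.
    K:     (3, 3) camera intrinsics matrix.
    """
    # Normalize pixel coordinates with the inverse intrinsics so the
    # unknown is the plain projection matrix P = [R | t].
    uv1 = np.hstack([pts2d, np.ones((len(pts2d), 1))])
    norm = (np.linalg.inv(K) @ uv1.T).T  # rows are (u, v, 1)

    # Each correspondence contributes two linear equations in the 12
    # entries of P, stacked row-major as [P_row1, P_row2, P_row3].
    A = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, norm):
        Xh = np.array([X, Y, Z, 1.0])
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))

    # The solution is the right singular vector with the smallest
    # singular value (the null space of A for exact correspondences).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)

    # P is recovered only up to scale; project its left 3x3 block onto
    # the rotation group and resolve the sign so that det(R) = +1.
    U, S, Vt2 = np.linalg.svd(P[:, :3])
    if np.linalg.det(U @ Vt2) < 0:
        P = -P
        U, _, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ Vt2
    t = P[:, 3] / S.mean()
    return R, t
```

The paper's studied alternative replaces the solver step above with a second, learned network mapping keypoint locations to pose, which is what makes the pipeline end-to-end trainable.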
Cite
Text
Legrand et al. "End-to-End Neural Estimation of Spacecraft Pose with Intermediate Detection of Keypoints." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25056-9_11
Markdown
[Legrand et al. "End-to-End Neural Estimation of Spacecraft Pose with Intermediate Detection of Keypoints." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/legrand2022eccvw-endtoend/) doi:10.1007/978-3-031-25056-9_11
BibTeX
@inproceedings{legrand2022eccvw-endtoend,
title = {{End-to-End Neural Estimation of Spacecraft Pose with Intermediate Detection of Keypoints}},
author = {Legrand, Antoine and Detry, Renaud and De Vleeschouwer, Christophe},
booktitle = {European Conference on Computer Vision Workshops},
year = {2022},
pages = {154-169},
doi = {10.1007/978-3-031-25056-9_11},
url = {https://mlanthology.org/eccvw/2022/legrand2022eccvw-endtoend/}
}