Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information

Abstract

This paper presents an end-to-end radar odometry system which delivers robust, real-time pose estimates based on a learned embedding space free of sensing artefacts and distractor objects. The system deploys a fully differentiable, correlation-based radar matching approach. This provides the same level of interpretability as established scan-matching methods and allows for a principled derivation of uncertainty estimates. The system is trained in a (self-)supervised way using only previously obtained pose information as a training signal. Using 280km of urban driving data, we demonstrate that our approach outperforms the previous state-of-the-art in radar odometry by reducing errors by up to 68% whilst running an order of magnitude faster.
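
To make the correlation-based matching concrete, the sketch below shows one way such a fully differentiable estimator can be realised. This is a minimal illustration under assumed details, not the authors' implementation: the use of PyTorch, the function names correlation_volume and soft_argmax, and the grid sizes are all illustrative, and only translation is recovered (the full system also estimates rotation). The softmax over the correlation map is what makes the peak extraction differentiable and yields a distribution from which uncertainty estimates can be derived.

# Illustrative sketch only, not the paper's implementation: names and
# framework (PyTorch) are assumptions; rotation estimation is omitted.
import torch
import torch.nn.functional as F

def correlation_volume(scan_a: torch.Tensor, scan_b: torch.Tensor) -> torch.Tensor:
    """Dense cross-correlation of scan_b against scan_a over all 2-D shifts.

    scan_a, scan_b: (H, W) Cartesian radar grids, e.g. learned embeddings
    with distractors masked out. Returns a correlation map whose peak
    location corresponds to the most likely translation.
    """
    a = scan_a.unsqueeze(0).unsqueeze(0)  # (1, 1, H, W) input
    b = scan_b.unsqueeze(0).unsqueeze(0)  # (1, 1, H, W) used as the kernel
    h, w = scan_b.shape
    # conv2d with scan_b as the kernel computes correlation; padding places
    # the zero-shift response at the centre of the output map.
    corr = F.conv2d(a, b, padding=(h // 2, w // 2))
    return corr.squeeze()

def soft_argmax(corr: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Differentiable peak extraction: the expected shift under a softmax
    over the correlation map. The same softmax distribution is one basis
    for deriving uncertainty estimates."""
    h, w = corr.shape
    probs = F.softmax(corr.flatten() / temperature, dim=0).view(h, w)
    ys = torch.arange(h, dtype=corr.dtype)
    xs = torch.arange(w, dtype=corr.dtype)
    # Expected (row, col) peak location, offset so zero shift maps to (0, 0).
    dy = (probs.sum(dim=1) * ys).sum() - h // 2
    dx = (probs.sum(dim=0) * xs).sum() - w // 2
    return torch.stack([dx, dy])

# Usage: shift a random scan by a known offset and recover the alignment.
torch.manual_seed(0)
scan_a = torch.rand(64, 64)
scan_b = torch.roll(scan_a, shifts=(3, -5), dims=(0, 1))
shift = soft_argmax(correlation_volume(scan_a, scan_b), temperature=0.01)
print(shift)  # ~ (5., -3.): the (dx, dy) such that scan_b(u, v) = scan_a(u + dy, v + dx)

Because every step (convolution, softmax, expectation) is differentiable, a pose loss against previously obtained ground-truth poses can be backpropagated through the matcher into the embedding network, which is what allows the masking to be learned from pose information alone.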

Cite

Text

Barnes et al. "Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information." Conference on Robot Learning, 2019.

Markdown

[Barnes et al. "Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/barnes2019corl-masking/)

BibTeX

@inproceedings{barnes2019corl-masking,
  title     = {{Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information}},
  author    = {Barnes, Dan and Weston, Rob and Posner, Ingmar},
  booktitle = {Conference on Robot Learning},
  year      = {2019},
  pages     = {303--316},
  volume    = {100},
  url       = {https://mlanthology.org/corl/2019/barnes2019corl-masking/}
}