Differentiable SLAM-Net: Learning Particle SLAM for Visual Navigation
Abstract
Simultaneous localization and mapping (SLAM) remains challenging for a number of downstream applications, such as visual robot navigation, because of rapid turns, featureless walls, and poor camera quality. We introduce the Differentiable SLAM Network (SLAM-net) along with a navigation architecture to enable planar robot navigation in previously unseen indoor environments. SLAM-net encodes a particle filter based SLAM algorithm in a differentiable computation graph, and learns task-oriented neural network components by backpropagating through the SLAM algorithm. Because it can optimize all model components jointly for the end-objective, SLAM-net learns to be robust in challenging conditions. We run experiments in the Habitat platform with different real-world RGB and RGB-D datasets. SLAM-net significantly outperforms the widely adopted ORB-SLAM in noisy conditions. Our navigation architecture with SLAM-net improves the state-of-the-art for the Habitat Challenge 2020 PointNav task by a large margin (37% to 64% success).
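To make the abstract's core idea concrete, the following is a minimal toy sketch of one particle-filter step with soft resampling, the mechanism that keeps the filter differentiable so gradients can flow through resampling during training. This is an illustrative NumPy mock-up under assumed toy models, not the authors' implementation: in SLAM-net the transition and observation models are learned neural networks evaluated inside an autodiff framework, and the `landmark`-based likelihood here is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K particles, each a planar pose (x, y, theta).
K = 32
particles = np.zeros((K, 3))          # all particles start at the origin
log_weights = np.full(K, -np.log(K))  # uniform log-weights

def transition(particles, odom, noise_std=0.1):
    """Propagate particles with a noisy relative-motion model.
    (In SLAM-net this is a learned transition model.)"""
    return particles + odom + noise_std * rng.standard_normal(particles.shape)

def observation_log_prob(particles, landmark=np.array([1.0, 0.0])):
    """Toy observation model: log-likelihood decays with distance to a
    fixed landmark. (In SLAM-net a learned network scores each particle
    against local map features instead.)"""
    d = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    return -0.5 * d**2

# One filtering step: predict, weight, renormalize.
particles = transition(particles, odom=np.array([0.5, 0.0, 0.0]))
log_weights = log_weights + observation_log_prob(particles)
log_weights -= np.log(np.sum(np.exp(log_weights)))

# Soft resampling: sample from a mixture of the weights and a uniform
# distribution, then importance-correct. With alpha < 1 the corrected
# weights depend on the original ones, so in a differentiable framework
# gradients survive the resampling step.
alpha = 0.5
w = np.exp(log_weights)
q = alpha * w + (1.0 - alpha) / K       # mixture proposal, sums to 1
idx = rng.choice(K, size=K, p=q)
particles = particles[idx]
log_weights = np.log(w[idx] / q[idx])   # importance correction
log_weights -= np.log(np.sum(np.exp(log_weights)))
```

With `alpha = 1` this reduces to ordinary multinomial resampling, whose sampled indices block gradient flow; the mixture proposal is the standard trick for trading a little estimator variance for trainability.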
Cite
Text
Karkus et al. "Differentiable SLAM-Net: Learning Particle SLAM for Visual Navigation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00284
Markdown
[Karkus et al. "Differentiable SLAM-Net: Learning Particle SLAM for Visual Navigation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/karkus2021cvpr-differentiable/) doi:10.1109/CVPR46437.2021.00284
BibTeX
@inproceedings{karkus2021cvpr-differentiable,
title = {{Differentiable SLAM-Net: Learning Particle SLAM for Visual Navigation}},
author = {Karkus, Peter and Cai, Shaojun and Hsu, David},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {2815--2825},
doi = {10.1109/CVPR46437.2021.00284},
url = {https://mlanthology.org/cvpr/2021/karkus2021cvpr-differentiable/}
}