Deep Patch Visual SLAM
Abstract
Recent work in Visual Odometry and SLAM has shown the effectiveness of using deep network backbones. Despite excellent accuracy, such approaches are often expensive to run or do not generalize well zero-shot. To address this problem, we introduce Deep Patch Visual-SLAM, a new system for monocular visual SLAM based on the DPVO visual odometry system. We introduce two loop closure mechanisms which significantly improve accuracy with minimal runtime and memory overhead. On real-world datasets, DPV-SLAM runs at 1x-3x real-time framerates. We achieve comparable accuracy to DROID-SLAM on EuRoC and TartanAir while running twice as fast and using a third of the VRAM. We also outperform DROID-SLAM by large margins on KITTI. As DPV-SLAM is an extension of DPVO, its code can be found in the same repository: https://github.com/princeton-vl/DPVO
Cite
Text
Lipson et al. "Deep Patch Visual SLAM." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72627-9_24
Markdown
[Lipson et al. "Deep Patch Visual SLAM." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/lipson2024eccv-deep/) doi:10.1007/978-3-031-72627-9_24
BibTeX
@inproceedings{lipson2024eccv-deep,
title = {{Deep Patch Visual SLAM}},
author = {Lipson, Lahav and Teed, Zachary and Deng, Jia},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72627-9_24},
url = {https://mlanthology.org/eccv/2024/lipson2024eccv-deep/}
}