DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
Abstract
We introduce DROID-SLAM, a new deep learning-based SLAM system. DROID-SLAM consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. DROID-SLAM is accurate, achieving large improvements over prior work, and robust, suffering substantially fewer catastrophic failures. Despite training only on monocular video, it can leverage stereo or RGB-D video to achieve improved performance at test time. Our open-source code is available at https://github.com/princeton-vl/DROID-SLAM.
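To make the abstract's core mechanism concrete, the sketch below illustrates the idea behind a Dense Bundle Adjustment (DBA) layer: jointly refining camera poses and per-pixel (inverse) depth by a damped Gauss-Newton solve over confidence-weighted reprojection residuals. This is a hypothetical toy, not the paper's implementation: it assumes a 2D world with 1D images, two keyframes, and fabricated correspondence targets and weights standing in for the recurrent update operator's predictions; the Jacobian is taken with PyTorch autograd rather than the analytic SE(3) Jacobians used in practice.

```python
# Toy sketch of a DBA-style update (assumption-laden; NOT DROID-SLAM's code):
# jointly refine camera poses and per-pixel inverse depth by damped Gauss-Newton
# on confidence-weighted reprojection residuals, in a 2D world / 1D image model.
import torch

torch.manual_seed(0)
N = 50  # number of "pixels" (landmarks observed in the reference frame 0)

# Ground-truth scene: pixel coordinates in frame 0 and true inverse depths.
u0 = torch.linspace(-0.5, 0.5, N)
d_true = 0.2 + 0.6 * torch.rand(N)
poses_true = torch.tensor([[0.05, 0.10, -0.05],     # two keyframes,
                           [-0.03, 0.20,  0.08]])   # each [theta, tx, ty]

def project(pose, d):
    """Back-project frame-0 pixels with inverse depth d, transform into a
    target camera (2D rotation theta, translation t), project to a 1D image."""
    theta, tx, ty = pose[0], pose[1], pose[2]
    X = torch.stack([u0 / d, 1.0 / d], dim=-1)      # 2D points in frame 0
    R = torch.stack([torch.stack([torch.cos(theta), -torch.sin(theta)]),
                     torch.stack([torch.sin(theta),  torch.cos(theta)])])
    Xc = X @ R.T + torch.stack([tx, ty])
    return Xc[:, 0] / Xc[:, 1]                      # perspective divide

# Fake "network predictions": target correspondences in each keyframe plus
# per-pixel confidence weights (in DROID-SLAM these come from the recurrent
# update operator; here they are synthesized for illustration).
targets = torch.stack([project(p, d_true) for p in poses_true])
targets = targets + 1e-3 * torch.randn(2, N)
w = torch.rand(2, N) + 0.5

def residual(x):
    poses, d = x[:6].view(2, 3), x[6:]
    pred = torch.stack([project(poses[0], d), project(poses[1], d)])
    return (torch.sqrt(w) * (pred - targets)).reshape(-1)

# Initial guess: identity poses, constant inverse depth.
x = torch.cat([torch.zeros(6), 0.5 * torch.ones(N)])

for it in range(10):
    r = residual(x)
    J = torch.autograd.functional.jacobian(residual, x)
    # Damped normal equations; the damping also absorbs the monocular
    # scale-gauge ambiguity (depths and translations trade off globally).
    H = J.T @ J + 1e-4 * torch.eye(x.numel())
    x = x + torch.linalg.solve(H, -J.T @ r)
    print(f"iter {it}: weighted residual norm = {r.norm():.6f}")
```

Running the loop, the weighted residual norm drops to roughly the injected noise level within a few iterations; in the actual system this solve is embedded as a differentiable layer inside a recurrent network and alternated with updated flow and confidence predictions.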
Cite
Text
Teed and Deng. "DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras." Neural Information Processing Systems, 2021.

Markdown
[Teed and Deng. "DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/teed2021neurips-droidslam/)

BibTeX
@inproceedings{teed2021neurips-droidslam,
  title = {{DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras}},
  author = {Teed, Zachary and Deng, Jia},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/teed2021neurips-droidslam/}
}