Point-SLAM: Dense Neural Point Cloud-Based SLAM

Abstract

We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD input which anchors the features of a neural scene representation in a point cloud that is iteratively generated in an input-dependent, data-driven manner. We demonstrate that both tracking and mapping can be performed with the same point-based neural scene representation by minimizing an RGBD-based re-rendering loss. In contrast to recent dense neural SLAM methods which anchor the scene features in a sparse grid, our point-based approach allows us to dynamically adapt the anchor point density to the information density of the input. This strategy reduces runtime and memory usage in regions with fewer details and dedicates a higher point density to resolving fine details. Our approach performs better than or competitively with existing dense neural RGBD SLAM methods in tracking, mapping, and rendering accuracy on the Replica, TUM-RGBD, and ScanNet datasets.
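To make the core idea of input-dependent anchor density concrete, below is a minimal Python sketch, not the authors' implementation: it maps local image gradient magnitude (a proxy for information density) to a neighborhood search radius, and only adds a new anchor point where no existing point covers that neighborhood. All function names and the continuous radius mapping are illustrative assumptions; the paper's actual point-adding procedure and radius schedule may differ.

```python
# Hypothetical sketch of input-dependent anchor-point density (not the
# authors' code): high-gradient (detailed) regions get a smaller search
# radius and therefore a denser point cloud; flat regions get fewer points.
import numpy as np
from scipy.spatial import cKDTree


def gradient_magnitude(gray):
    """Per-pixel gradient magnitude of a grayscale image (H, W)."""
    gy, gx = np.gradient(gray)
    return np.sqrt(gx ** 2 + gy ** 2)


def adaptive_radius(grad, r_min=0.02, r_max=0.08, g_hi=0.15):
    """Map gradient magnitude to a search radius: more detail -> smaller radius.

    r_min, r_max, g_hi are illustrative values, not taken from the paper.
    """
    t = np.clip(grad / g_hi, 0.0, 1.0)
    return r_max - t * (r_max - r_min)


def grow_point_cloud(points, candidates, radii):
    """Add a candidate 3D point only if no existing anchor lies within its radius.

    Rebuilding the KD-tree per candidate is O(n log n) each step; fine for a
    sketch, but a real system would update the index incrementally.
    """
    pts = list(points)
    for p, r in zip(candidates, radii):
        if pts:
            tree = cKDTree(np.asarray(pts))
            if tree.query_ball_point(p, r):
                continue  # region already covered at this density
        pts.append(p)
    return np.asarray(pts)
```

In the paper itself, candidate points come from unprojecting sampled depth pixels along camera rays, and the radius is (to the best of our understanding) chosen from a small discrete set based on color gradient; the continuous mapping above is a simplification for illustration.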

Cite

Text

Sandström et al. "Point-SLAM: Dense Neural Point Cloud-Based SLAM." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01690

Markdown

[Sandström et al. "Point-SLAM: Dense Neural Point Cloud-Based SLAM." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/sandstrom2023iccv-pointslam/) doi:10.1109/ICCV51070.2023.01690

BibTeX

@inproceedings{sandstrom2023iccv-pointslam,
  title     = {{Point-SLAM: Dense Neural Point Cloud-Based SLAM}},
  author    = {Sandström, Erik and Li, Yue and Van Gool, Luc and Oswald, Martin R.},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {18433--18444},
  doi       = {10.1109/ICCV51070.2023.01690},
  url       = {https://mlanthology.org/iccv/2023/sandstrom2023iccv-pointslam/}
}