vMAP: Vectorised Object Mapping for Neural Field SLAM

Abstract

We present vMAP, an object-level dense SLAM system using neural field representations. Each object is represented by a small MLP, enabling efficient, watertight object modelling without the need for 3D priors. As an RGB-D camera browses a scene with no prior information, vMAP detects object instances on-the-fly and dynamically adds them to its map. Specifically, thanks to the power of vectorised training, vMAP can optimise as many as 50 individual objects in a single scene, with highly efficient training and map updates at 5 Hz. We experimentally demonstrate significantly improved scene-level and object-level reconstruction quality compared to prior neural field SLAM systems. Project page: https://kxhit.github.io/vMAP.
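The key efficiency idea in the abstract is vectorised training: rather than looping over many small per-object MLPs, their parameters can be stacked along a leading "object" axis so that a single batched matrix multiply evaluates all networks at once. Below is a minimal NumPy sketch of this pattern; all sizes, names, and the two-layer architecture are illustrative assumptions, not details of the vMAP implementation (which trains its networks with automatic batching in PyTorch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K small object MLPs, each mapping a 3-D point
# to a 4-D output (e.g. colour + density). Purely illustrative.
K, D_in, D_hid, D_out = 50, 3, 32, 4

# Stack every object's weights along a leading "object" axis so one
# batched contraction evaluates all K networks simultaneously.
W1 = rng.standard_normal((K, D_in, D_hid)) * 0.1
b1 = np.zeros((K, D_hid))
W2 = rng.standard_normal((K, D_hid, D_out)) * 0.1
b2 = np.zeros((K, D_out))

def forward_all(x):
    """Evaluate all K MLPs, each on its own batch of N points.

    x: (K, N, D_in) -> (K, N, D_out)
    """
    h = np.einsum('knd,kdh->knh', x, W1) + b1[:, None, :]
    h = np.maximum(h, 0.0)  # ReLU
    return np.einsum('knh,kho->kno', h, W2) + b2[:, None, :]

points = rng.standard_normal((K, 128, D_in))  # 128 sampled points per object
out = forward_all(points)
print(out.shape)  # (50, 128, 4)
```

Because all K forward passes reduce to two batched contractions, the cost of adding another object is a marginally larger batch rather than another optimisation loop, which is what makes dozens of per-object networks tractable.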

Cite

Text

Kong et al. "vMAP: Vectorised Object Mapping for Neural Field SLAM." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00098

Markdown

[Kong et al. "vMAP: Vectorised Object Mapping for Neural Field SLAM." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/kong2023cvpr-vmap/) doi:10.1109/CVPR52729.2023.00098

BibTeX

@inproceedings{kong2023cvpr-vmap,
  title     = {{vMAP: Vectorised Object Mapping for Neural Field SLAM}},
  author    = {Kong, Xin and Liu, Shikun and Taher, Marwan and Davison, Andrew J.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {952--961},
  doi       = {10.1109/CVPR52729.2023.00098},
  url       = {https://mlanthology.org/cvpr/2023/kong2023cvpr-vmap/}
}