CROSSFIRE: Camera Relocalization on Self-Supervised Features from an Implicit Representation

Abstract

Beyond novel view synthesis, Neural Radiance Fields are useful for applications that interact with the real world. In this paper, we use them as an implicit map of a given scene and propose a camera relocalization algorithm tailored for this representation. The proposed method computes, in real time, the precise position of a device from a single RGB camera during navigation. In contrast with previous work, we rely neither on pose regression nor on photometric alignment, but instead use dense local features obtained through volumetric rendering and specialized to the scene with a self-supervised objective. As a result, our algorithm is more accurate than competing methods, operates in dynamic outdoor environments with changing lighting conditions, and can be readily integrated into any volumetric neural renderer.
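To illustrate the core idea described in the abstract, the sketch below shows how an implicit scene representation can be extended with a per-point feature head whose outputs are volume-rendered along rays, yielding dense scene features that can later be matched against features from a query image. This is a minimal, hypothetical sketch, not the paper's implementation: the names FeatureNeRF and render_along_rays, the network sizes, and the sampling scheme are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureNeRF(nn.Module):
    """Toy NeRF-style field returning density, color, and a local feature per 3D point.
    (Illustrative only; the actual CROSSFIRE architecture may differ.)"""

    def __init__(self, feat_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.rgb_head = nn.Linear(hidden, 3)
        self.feat_head = nn.Linear(hidden, feat_dim)

    def forward(self, xyz: torch.Tensor):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(h))          # non-negative density
        rgb = torch.sigmoid(self.rgb_head(h))           # color in [0, 1]
        feat = F.normalize(self.feat_head(h), dim=-1)   # unit-norm feature for cosine matching
        return sigma, rgb, feat


def render_along_rays(model, origins, dirs, n_samples=64, near=0.1, far=5.0):
    """Standard alpha-compositing volume rendering, applied to both color and features."""
    t = torch.linspace(near, far, n_samples, device=origins.device)           # (S,)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]           # (R, S, 3)
    sigma, rgb, feat = model(pts)                                             # (R, S, ...)
    delta = torch.full_like(t, (far - near) / n_samples)                      # constant step size
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)                       # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                                 # accumulated transmittance
    weights = alpha * trans                                                   # (R, S)
    rgb_out = (weights[..., None] * rgb).sum(dim=1)                           # rendered color
    feat_out = (weights[..., None] * feat).sum(dim=1)                         # rendered dense feature
    return rgb_out, feat_out


# Example: render features for 1024 rays (camera model omitted for brevity).
model = FeatureNeRF()
origins = torch.zeros(1024, 3)
dirs = F.normalize(torch.randn(1024, 3), dim=-1)
rgb, feats = render_along_rays(model, origins, dirs)
```

In a relocalization pipeline of this kind, the rendered per-pixel features would be matched (e.g. by cosine similarity, hence the L2 normalization above) against features extracted from the query image to form 2D-3D correspondences for a PnP-style pose solver; the self-supervised objective mentioned in the abstract is what specializes these features to the scene, and is not reproduced here.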

Cite

Text

Moreau et al. "CROSSFIRE: Camera Relocalization on Self-Supervised Features from an Implicit Representation." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00030

Markdown

[Moreau et al. "CROSSFIRE: Camera Relocalization on Self-Supervised Features from an Implicit Representation." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/moreau2023iccv-crossfire/) doi:10.1109/ICCV51070.2023.00030

BibTeX

@inproceedings{moreau2023iccv-crossfire,
  title     = {{CROSSFIRE: Camera Relocalization on Self-Supervised Features from an Implicit Representation}},
  author    = {Moreau, Arthur and Piasco, Nathan and Bennehar, Moussab and Tsishkou, Dzmitry and Stanciulescu, Bogdan and de La Fortelle, Arnaud},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {252--262},
  doi       = {10.1109/ICCV51070.2023.00030},
  url       = {https://mlanthology.org/iccv/2023/moreau2023iccv-crossfire/}
}