F3Loc: Fusion and Filtering for Floorplan Localization

Abstract

In this paper, we propose an efficient data-driven solution to self-localization within a floorplan. Floorplan data is readily available, long-term persistent, and inherently robust to changes in visual appearance. Our method does not require retraining per map and location, nor does it demand a large database of images of the area of interest. We propose a novel probabilistic model consisting of an observation module and a novel temporal filtering module. Operating internally with an efficient ray-based representation, the observation module consists of a single-view and a multiview module that predict horizontal depth from images, and it fuses their results to benefit from the advantages offered by either methodology. Our method operates on conventional consumer hardware and overcomes a common limitation of competing methods, which often demand upright images. Our full system meets real-time requirements while outperforming the state of the art by a significant margin.
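
To make the described pipeline concrete, the sketch below shows how a ray-based observation likelihood and a temporal filtering step could interact. This is a minimal illustration under assumed interfaces, not the authors' implementation: fuse_rays, observation_likelihood, filter_step, the Gaussian measurement model, and the fixed fusion weight are all hypothetical stand-ins for the learned components described in the paper.

import numpy as np

def fuse_rays(single_rays, multi_rays, w=0.5):
    # Blend single-view and multiview horizontal-depth predictions.
    # The paper fuses the two modules' outputs; this fixed convex
    # weight w is only a stand-in for illustration.
    return w * single_rays + (1.0 - w) * multi_rays

def observation_likelihood(pred_rays, floorplan_rays, sigma=0.1):
    # Score each candidate pose by comparing predicted depth rays
    # against rays cast from that pose in the floorplan, under a
    # simple Gaussian noise model (an assumption for this sketch).
    # pred_rays:      (R,) predicted horizontal depths
    # floorplan_rays: (P, R) depths ray-cast from P candidate poses
    err = floorplan_rays - pred_rays[None, :]
    log_lik = -0.5 * np.sum((err / sigma) ** 2, axis=1)
    log_lik -= log_lik.max()  # subtract max for numerical stability
    lik = np.exp(log_lik)
    return lik / lik.sum()

def filter_step(belief, transition, likelihood):
    # One predict-update cycle of a discrete Bayes filter over the
    # candidate poses: motion prediction, then measurement update.
    predicted = transition @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Toy usage: 4 candidate poses, 8 depth rays per pose.
rng = np.random.default_rng(0)
floorplan_rays = rng.uniform(0.5, 5.0, size=(4, 8))
pred = fuse_rays(floorplan_rays[2] + 0.05, floorplan_rays[2] - 0.05)
belief = np.full(4, 0.25)  # uniform prior over poses
transition = np.eye(4)     # stationary motion model for the toy example
belief = filter_step(belief, transition,
                     observation_likelihood(pred, floorplan_rays))

With these toy inputs, the posterior belief concentrates on pose 2, whose floorplan rays best match the fused prediction.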

Cite

Text

Chen et al. "F3Loc: Fusion and Filtering for Floorplan Localization." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01707

Markdown

[Chen et al. "F3Loc: Fusion and Filtering for Floorplan Localization." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/chen2024cvpr-f3loc/) doi:10.1109/CVPR52733.2024.01707

BibTeX

@inproceedings{chen2024cvpr-f3loc,
  title     = {{F3Loc: Fusion and Filtering for Floorplan Localization}},
  author    = {Chen, Changan and Wang, Rui and Vogel, Christoph and Pollefeys, Marc},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {18029--18038},
  doi       = {10.1109/CVPR52733.2024.01707},
  url       = {https://mlanthology.org/cvpr/2024/chen2024cvpr-f3loc/}
}