Fusion of Inertial and Visual Measurements for RGB-D SLAM on Mobile Devices
Abstract
Simultaneous Localization and Mapping (SLAM) algorithms have recently been deployed on mobile devices, where they can enable a broad range of novel applications. Nevertheless, pure visual SLAM is inherently weak in environments with few visual features. Indeed, even many recent proposals based on RGB-D sensors cannot properly handle such scenarios, as several steps of the algorithms rely on matching visual features. In this work we propose a framework suitable for mobile platforms to fuse pose estimates attained from visual and inertial measurements, with the aim of extending the range of scenarios addressable by mobile visual SLAM. The framework deploys an array of Kalman filters, where the careful selection of the state variables and the preprocessing of the inertial sensor measurements result in a simple and effective data fusion process. We present qualitative and quantitative experiments to show the improved SLAM performance delivered by the proposed approach.
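To illustrate the kind of fusion the abstract describes, the sketch below shows a standard Kalman measurement update applied to a single pose component: an uncertain inertial prediction is corrected by a more precise visual estimate. This is a minimal, generic example; all variable names and noise values are hypothetical and do not come from the paper, which uses an array of filters over carefully chosen state variables.

```python
# Illustrative 1-D Kalman filter fusing two noisy pose estimates
# (e.g., one from integrated inertial data, one from visual tracking).
# Hypothetical values only -- not the authors' implementation.

def kalman_update(x, P, z, R):
    """Standard scalar Kalman measurement update.
    x: prior state estimate, P: prior variance,
    z: measurement, R: measurement variance."""
    K = P / (P + R)          # Kalman gain: weight given to the measurement
    x_new = x + K * (z - x)  # corrected estimate
    P_new = (1.0 - K) * P    # reduced uncertainty after the update
    return x_new, P_new

# Inertial prediction of one pose component, with large uncertainty.
x, P = 1.0, 0.5
# Visual estimate of the same component, with smaller variance.
z_visual, R_visual = 1.2, 0.1

x, P = kalman_update(x, P, z_visual, R_visual)
# The fused estimate lies between the two inputs, weighted by their
# confidences, and its variance is smaller than either prior variance.
```

The same update, applied per state variable with sensor-specific noise models, captures the essence of weighting visual and inertial pose estimates by their respective uncertainties.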
Cite
Text
Brunetto et al. "Fusion of Inertial and Visual Measurements for RGB-D SLAM on Mobile Devices." IEEE/CVF International Conference on Computer Vision Workshops, 2015. doi:10.1109/ICCVW.2015.29
Markdown
[Brunetto et al. "Fusion of Inertial and Visual Measurements for RGB-D SLAM on Mobile Devices." IEEE/CVF International Conference on Computer Vision Workshops, 2015.](https://mlanthology.org/iccvw/2015/brunetto2015iccvw-fusion/) doi:10.1109/ICCVW.2015.29
BibTeX
@inproceedings{brunetto2015iccvw-fusion,
title = {{Fusion of Inertial and Visual Measurements for RGB-D SLAM on Mobile Devices}},
author = {Brunetto, Nicholas and Salti, Samuele and Fioraio, Nicola and Cavallari, Tommaso and Di Stefano, Luigi},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2015},
pages = {148-156},
doi = {10.1109/ICCVW.2015.29},
url = {https://mlanthology.org/iccvw/2015/brunetto2015iccvw-fusion/}
}