Weakly Supervised End2End Deep Visual Odometry
Abstract
Visual odometry is an ill-posed problem that is used in many robotics applications, especially mapless navigation in automated driving. Recent work has shown that deep models outperform traditional approaches in localization accuracy and, moreover, significantly reduce catastrophic failures. The disadvantage of most of these models is their strong dependence on large amounts of high-quality ground truth data. However, accurate and dense depth ground truth for real-world datasets is difficult to obtain. As a result, deep models are often trained on synthetic data, which introduces a domain gap. We present a weakly supervised approach to overcome this limitation. Our approach trains on estimated optical flow, which can be generated without high-quality dense depth ground truth; it requires only ground truth poses and raw camera images. In the experiments, we show that our approach enables deep visual odometry to be trained efficiently on the target domain (real data) while achieving state-of-the-art performance on the KITTI dataset.
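The page gives no implementation details beyond the abstract, but the training signal it describes can be illustrated with a minimal sketch: a pose term supervised by ground truth poses, plus a consistency term against optical flow produced by an off-the-shelf estimator, standing in for dense depth supervision. Everything below (the function name `weakly_supervised_vo_loss`, the 6-DoF pose vectors, the L1 losses, and the loss weights) is an assumption for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def weakly_supervised_vo_loss(pred_pose, gt_pose, induced_flow, est_flow,
                              pose_weight=1.0, flow_weight=0.1):
    """Hypothetical weakly supervised VO loss (not the paper's exact loss).

    pred_pose / gt_pose: (B, 6) relative poses as translation + axis-angle.
    induced_flow:        (B, 2, H, W) flow induced by the network's own
                         depth and pose predictions.
    est_flow:            (B, 2, H, W) optical flow from a pretrained
                         estimator -- the weak supervision signal that
                         replaces dense depth ground truth.
    """
    pose_loss = F.l1_loss(pred_pose, gt_pose)      # supervised by GT poses
    flow_loss = F.l1_loss(induced_flow, est_flow)  # weak flow supervision
    return pose_weight * pose_loss + flow_weight * flow_loss

# Toy usage with random tensors standing in for network outputs.
pred_pose = torch.randn(4, 6, requires_grad=True)
gt_pose = torch.randn(4, 6)
induced_flow = torch.randn(4, 2, 128, 416, requires_grad=True)
est_flow = torch.randn(4, 2, 128, 416)
loss = weakly_supervised_vo_loss(pred_pose, gt_pose, induced_flow, est_flow)
loss.backward()  # gradients reach whatever produced pred_pose / induced_flow
```

Keeping the flow term's weight small reflects that the estimated flow is itself noisy; in practice such weights would be tuned on a validation split.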
Cite
Text
Abouee et al. "Weakly Supervised End2End Deep Visual Odometry." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00091
Markdown
[Abouee et al. "Weakly Supervised End2End Deep Visual Odometry." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/abouee2024cvprw-weakly/) doi:10.1109/CVPRW63382.2024.00091
BibTeX
@inproceedings{abouee2024cvprw-weakly,
title = {{Weakly Supervised End2End Deep Visual Odometry}},
author = {Abouee, Amin and Ravi, Ashwanth and Hinneburg, Lars and Dziwulski, Mateusz and Ölsner, Florian and Hess, Jürgen and Milz, Stefan and Mäder, Patrick},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2024},
pages = {858--865},
doi = {10.1109/CVPRW63382.2024.00091},
url = {https://mlanthology.org/cvprw/2024/abouee2024cvprw-weakly/}
}