Visual Navigation Aid for the Blind in Dynamic Environments
Abstract
We describe a robust method to estimate egomotion in highly dynamic environments. Our application is a head-mounted stereo system designed to help the visually impaired navigate. Instead of computing egomotion from 3D point correspondences in consecutive frames, we propose to find the ground plane, then decompose the 6DoF egomotion into the motion of the ground plane and a planar motion on the ground plane. The ground plane is estimated at each frame by analysis of the disparity array. Next, we estimate the normal to the ground plane, either from the visual data or from the IMU reading. We evaluate the results on both synthetic and real scenes, and compare the results of the direct 6DoF estimate with our plane-based approach, with and without the IMU. We conclude that egomotion estimation using this new approach produces significantly better results, both in simulation and on real data sets.
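The core decomposition can be illustrated with a minimal numpy sketch (this is an illustration of the general idea, not the authors' implementation; the helper names and the choice of ground normal are assumptions): given the ground-plane normal n, obtained from disparity analysis or from the IMU, rotate the camera frame so that n becomes the vertical axis; the residual egomotion in that frame is the 3DoF planar part (dx, dy, yaw).

```python
import numpy as np

def rotation_aligning(n, z=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix sending unit vector n onto z (Rodrigues' formula)."""
    n = n / np.linalg.norm(n)
    v = np.cross(n, z)                   # rotation axis (unnormalized)
    c = float(np.dot(n, z))              # cos of the rotation angle
    s = float(np.linalg.norm(v))         # sin of the rotation angle
    if s < 1e-12:                        # n already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])   # cross-product matrix of v
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

def planar_motion(R, t, n):
    """Express a 6DoF motion (R, t) in a frame whose z-axis is the
    ground-plane normal n; return the 3DoF planar part (dx, dy, yaw)."""
    A = rotation_aligning(n)             # camera frame -> ground frame
    Rg = A @ R @ A.T                     # rotation seen in the ground frame
    tg = A @ t                           # translation in the ground frame
    yaw = float(np.arctan2(Rg[1, 0], Rg[0, 0]))
    return tg[0], tg[1], yaw

# Example: ground normal roughly "up" in a y-down camera frame (assumed),
# with a motion that is purely planar; the decomposition recovers it.
n = np.array([0.0, -1.0, 0.0])
A = rotation_aligning(n)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
R = A.T @ Rz @ A                         # planar rotation, camera frame
t = A.T @ np.array([0.5, 0.2, 0.0])      # in-plane translation, camera frame
dx, dy, yaw = planar_motion(R, t, n)
```

In this roundtrip the recovered (dx, dy, yaw) equals (0.5, 0.2, 0.3): constraining the estimate to these three parameters, rather than the full six, is what makes the plane-based formulation more robust to independently moving objects in the scene.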
Cite
Text
Leung and Medioni. "Visual Navigation Aid for the Blind in Dynamic Environments." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014. doi:10.1109/CVPRW.2014.89
Markdown
[Leung and Medioni. "Visual Navigation Aid for the Blind in Dynamic Environments." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014.](https://mlanthology.org/cvprw/2014/leung2014cvprw-visual/) doi:10.1109/CVPRW.2014.89
BibTeX
@inproceedings{leung2014cvprw-visual,
title = {{Visual Navigation Aid for the Blind in Dynamic Environments}},
author = {Leung, Tung-Sing and Medioni, Gérard G.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2014},
pages = {579-586},
doi = {10.1109/CVPRW.2014.89},
url = {https://mlanthology.org/cvprw/2014/leung2014cvprw-visual/}
}