Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People
Abstract
This work is devoted to scene understanding and to improving mobility for visually impaired and blind people. We investigate how to exploit egocentric vision to provide semantic labeling of a scene captured by a head-mounted depth camera. More specifically, we propose a new method for locating the ground in a depth image regardless of the camera's pose. The remaining planes of the scene are located with the RANSAC method, semantically coded by their attributes, and mapped as cylinders into a generated 3D scene that serves as feedback to users. Experiments are conducted and the obtained results are discussed.
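The plane extraction step the abstract refers to can be illustrated with a minimal RANSAC plane fit over a 3D point cloud. This is a generic sketch, not the authors' implementation; the iteration count and inlier tolerance are assumed values.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit a plane n·p + d ≈ 0 to an (N, 3) point cloud via RANSAC.

    Returns the best (normal, offset) model and a boolean inlier mask.
    iters and tol are illustrative defaults, not values from the paper.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        # Sample 3 distinct points and form a candidate plane.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, skip
            continue
        n /= norm
        d = -n.dot(sample[0])
        # Count points within tol of the candidate plane.
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

In a full pipeline one would run this repeatedly, removing each detected plane's inliers before fitting the next, and then classify planes (ground, walls, tabletops) by their normal orientation and height.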
Cite
Text
Zatout et al. "Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00538
Markdown
[Zatout et al. "Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/zatout2019iccvw-egosemantic/) doi:10.1109/ICCVW.2019.00538
BibTeX
@inproceedings{zatout2019iccvw-egosemantic,
title = {{Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People}},
author = {Zatout, Chayma and Larabi, Slimane and Mendili, Ilyes and Barnabé, Soedji Ablam Edoh},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {4376--4384},
doi = {10.1109/ICCVW.2019.00538},
url = {https://mlanthology.org/iccvw/2019/zatout2019iccvw-egosemantic/}
}