SafeUAV: Learning to Estimate Depth and Safe Landing Areas for UAVs from Synthetic Data
Abstract
The emergence of relatively low-cost UAVs has prompted global concern about the safe operation of such devices. Since most of them can ‘autonomously’ fly via GPS waypoints, the lack of higher-level logic for emergency scenarios leads to an abundance of incidents involving property damage or personal injury. To tackle this problem, we propose a small, embeddable ConvNet for both depth and safe-landing-area estimation. Furthermore, since labeled training data in the 3D aerial field is scarce and ground images are unsuitable, we generate a novel synthetic aerial 3D dataset from 3D reconstructions. We use the synthetic data to learn to estimate depth from in-flight images and to segment them into ‘safe-landing’ and ‘obstacle’ regions. Our experiments demonstrate compelling results in practice on both synthetic data and real RGB drone footage.
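As a rough illustration of the segmentation task described above (not the paper's ConvNet, which learns this end-to-end), a predicted depth map could be post-processed into ‘safe-landing’ versus ‘obstacle’ regions by flagging patches whose local depth variation is small. The function name `safe_landing_mask` and the patch/threshold values below are illustrative assumptions:

```python
import numpy as np

def safe_landing_mask(depth, patch=5, max_std=0.05):
    """Illustrative sketch: mark pixels whose local depth variation
    is small as 'safe-landing' (True), the rest as 'obstacle' (False).

    depth   : 2-D array of per-pixel depths (e.g. metres)
    patch   : side length of the square neighbourhood examined
    max_std : maximum allowed depth standard deviation in a patch
    """
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    r = patch // 2
    # Border pixels lack a full neighbourhood and stay 'obstacle'.
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = depth[y - r:y + r + 1, x - r:x + r + 1]
            mask[y, x] = window.std() <= max_std
    return mask
```

A learned model replaces this hand-tuned rule: it can exploit RGB appearance as well as geometry, and remains usable even when no explicit depth map is available at inference time.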
Cite
Text
Marcu et al. "SafeUAV: Learning to Estimate Depth and Safe Landing Areas for UAVs from Synthetic Data." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11012-3_4
Markdown
[Marcu et al. "SafeUAV: Learning to Estimate Depth and Safe Landing Areas for UAVs from Synthetic Data." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/marcu2018eccvw-safeuav/) doi:10.1007/978-3-030-11012-3_4
BibTeX
@inproceedings{marcu2018eccvw-safeuav,
title = {{SafeUAV: Learning to Estimate Depth and Safe Landing Areas for UAVs from Synthetic Data}},
author = {Marcu, Alina and Costea, Dragos and Licaret, Vlad and Pîrvu, Mihai Cristian and Slusanschi, Emil and Leordeanu, Marius},
booktitle = {European Conference on Computer Vision Workshops},
year = {2018},
  pages = {43--58},
doi = {10.1007/978-3-030-11012-3_4},
url = {https://mlanthology.org/eccvw/2018/marcu2018eccvw-safeuav/}
}