Enabling Monocular Depth Perception at the Very Edge
Abstract
Depth estimation is crucial in several computer vision applications, and a recent trend aims at inferring such a cue from a single camera through computationally demanding CNNs — precluding their practical deployment in several application contexts characterized by low-power constraints. To this end, we develop a tiny network tailored to microcontrollers, processing low-resolution images to obtain a coarse depth map of the observed scene. Our solution enables depth perception with minimal power requirements (a few hundred mW), accurate enough to pave the way to several high-level applications at the edge.
Cite
Text
Peluso et al. "Enabling Monocular Depth Perception at the Very Edge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00204
Markdown
[Peluso et al. "Enabling Monocular Depth Perception at the Very Edge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/peluso2020cvprw-enabling/) doi:10.1109/CVPRW50498.2020.00204
BibTeX
@inproceedings{peluso2020cvprw-enabling,
title = {{Enabling Monocular Depth Perception at the Very Edge}},
author = {Peluso, Valentino and Cipolletta, Antonio and Calimera, Andrea and Poggi, Matteo and Tosi, Fabio and Aleotti, Filippo and Mattoccia, Stefano},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
  pages = {1581--1583},
doi = {10.1109/CVPRW50498.2020.00204},
url = {https://mlanthology.org/cvprw/2020/peluso2020cvprw-enabling/}
}