DeepSmooth: Efficient and Smooth Depth Completion
Abstract
Accurate and consistent depth maps are essential for numerous applications across domains such as robotics, Augmented Reality, and others. High-quality depth maps that are spatially and temporally consistent enable tasks such as Spatial Mapping, Video Portrait effects, and, more generally, 3D Scene Understanding. Depth data acquired from sensors is often incomplete and contains holes, whereas depth estimated from RGB images can be inaccurate. This work focuses on Depth Completion, the task of filling holes in depth data using color images. Most work in depth completion formulates the task at the frame level, filling each frame's depth individually. This produces undesirable flickering artifacts when the RGB-D video stream is viewed as a whole and has detrimental effects on downstream tasks. We propose DeepSmooth, a model that propagates information spatio-temporally to fill in depth maps. Using an architecture based on EfficientNet and pseudo-3D convolutions, and a loss function that enforces consistency across space and time, the proposed solution produces smooth depth maps.
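The abstract's mention of a "pseudo 3D-Conv" architecture refers to factorizing a full 3D convolution into a 2D spatial convolution followed by a 1D temporal one. The sketch below illustrates the parameter savings of that factorization; the channel and kernel sizes are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch: parameter count of a full 3D convolution versus a
# pseudo-3D ("2+1D") factorization into a spatial conv plus a temporal conv.
# Channel and kernel sizes below are hypothetical, not from the paper.
c_in, c_out = 64, 64        # assumed channel widths
kt, kh, kw = 3, 3, 3        # temporal and spatial kernel extents

# Full 3D conv: one kernel spanning time, height, and width.
full_3d = c_in * c_out * kt * kh * kw

# Pseudo-3D: a 1 x 3 x 3 spatial conv, then a 3 x 1 x 1 temporal conv.
spatial = c_in * c_out * 1 * kh * kw
temporal = c_out * c_out * kt * 1 * 1
pseudo_3d = spatial + temporal

print(full_3d, pseudo_3d)  # 110592 49152 -- less than half the parameters
```

With these sizes the factorized form uses 49,152 weights against 110,592 for the full 3D kernel, which is one reason pseudo-3D blocks are attractive for an efficiency-focused model.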
Cite
Text
Krishna and Vandrotti. "DeepSmooth: Efficient and Smooth Depth Completion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00338
Markdown
[Krishna and Vandrotti. "DeepSmooth: Efficient and Smooth Depth Completion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/krishna2023cvprw-deepsmooth/) doi:10.1109/CVPRW59228.2023.00338
BibTeX
@inproceedings{krishna2023cvprw-deepsmooth,
title = {{DeepSmooth: Efficient and Smooth Depth Completion}},
author = {Krishna, Sriram and Vandrotti, Basavaraja Shanthappa},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2023},
  pages = {3358--3367},
doi = {10.1109/CVPRW59228.2023.00338},
url = {https://mlanthology.org/cvprw/2023/krishna2023cvprw-deepsmooth/}
}