Depth Map Super-Resolution by Deep Multi-Scale Guidance

Abstract

Depth boundaries often lose sharpness when upsampling from low-resolution (LR) depth maps, especially at large upscaling factors. We present a new method to address the problem of depth map super-resolution, in which a high-resolution (HR) depth map is inferred from an LR depth map and an additional HR intensity image of the same scene. We propose a Multi-Scale Guided convolutional network (MSG-Net) for depth map super-resolution. MSG-Net complements LR depth features with HR intensity features using a multi-scale fusion strategy. Such multi-scale guidance allows the network to better adapt to upsampling of both fine- and large-scale structures. Specifically, the rich hierarchical HR intensity features at different levels progressively resolve ambiguity in depth map upsampling. Moreover, we employ a high-frequency domain training method to not only reduce training time but also facilitate the fusion of depth and intensity features. With the multi-scale guidance, MSG-Net achieves state-of-the-art performance for depth map upsampling.
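The high-frequency domain training mentioned in the abstract can be pictured as follows: rather than regressing the full HR depth map, the network predicts only the high-frequency residual that a plain upsampling of the LR input is missing, and the final estimate is the coarse upsampling plus that residual. Below is a minimal, self-contained sketch of this idea in pure Python; the function names (`upsample_nearest`, `high_frequency_target`, `reconstruct`) and the nearest-neighbour upsampler are illustrative assumptions, not taken from the authors' code.

```python
# Hypothetical sketch of high-frequency domain training: the network's
# regression target is HR - upsample(LR), and the HR estimate is
# upsample(LR) + predicted residual. All names are illustrative.

def upsample_nearest(depth_lr, factor):
    """Nearest-neighbour upsampling of a 2-D depth map (list of lists)."""
    return [
        [depth_lr[r // factor][c // factor]
         for c in range(len(depth_lr[0]) * factor)]
        for r in range(len(depth_lr) * factor)
    ]

def high_frequency_target(depth_hr, depth_lr, factor):
    """Residual the network is trained to predict: HR minus coarse upsampling."""
    coarse = upsample_nearest(depth_lr, factor)
    return [
        [hr - co for hr, co in zip(hr_row, co_row)]
        for hr_row, co_row in zip(depth_hr, coarse)
    ]

def reconstruct(depth_lr, predicted_residual, factor):
    """Final HR estimate: coarse upsampling plus predicted high-frequency part."""
    coarse = upsample_nearest(depth_lr, factor)
    return [
        [co + res for co, res in zip(co_row, res_row)]
        for co_row, res_row in zip(coarse, predicted_residual)
    ]

# Toy example: a 2x2 LR depth map and its 4x4 ground-truth HR counterpart.
lr = [[1.0, 2.0],
      [3.0, 4.0]]
hr = [[1.0, 1.5, 2.0, 2.0],
      [2.0, 2.5, 3.0, 3.0],
      [3.0, 3.5, 4.0, 4.0],
      [3.0, 3.5, 4.0, 4.0]]
residual = high_frequency_target(hr, lr, 2)
# A perfect residual prediction recovers the ground-truth HR map exactly.
assert reconstruct(lr, residual, 2) == hr
```

In MSG-Net the residual branch is additionally guided by HR intensity features fused at multiple scales; this sketch shows only the residual decomposition that makes that training scheme cheaper than regressing absolute depth values.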

Cite

Text

Hui et al. "Depth Map Super-Resolution by Deep Multi-Scale Guidance." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46487-9_22

Markdown

[Hui et al. "Depth Map Super-Resolution by Deep Multi-Scale Guidance." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/hui2016eccv-depth/) doi:10.1007/978-3-319-46487-9_22

BibTeX

@inproceedings{hui2016eccv-depth,
  title     = {{Depth Map Super-Resolution by Deep Multi-Scale Guidance}},
  author    = {Hui, Tak-Wai and Loy, Chen Change and Tang, Xiaoou},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {353--369},
  doi       = {10.1007/978-3-319-46487-9_22},
  url       = {https://mlanthology.org/eccv/2016/hui2016eccv-depth/}
}