Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach
Abstract
A novel approach to depth map super-resolution using multi-view uncalibrated photometric stereo is presented. Practically, an LED light source is attached to a commodity RGB-D sensor, which captures objects from multiple viewpoints under unknown motion. This non-static camera-to-object setup is modeled with a nonconvex variational approach, so that no calibration of lighting or camera motion is required, thanks to the formulation of an end-to-end joint optimization problem. Solving the proposed variational model yields high-resolution depth, reflectance, and camera estimates, as we show on challenging synthetic and real-world datasets.
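To give a concrete feel for the kind of energy such a variational model minimizes, here is a minimal sketch of a photometric data term under an assumed near-light Lambertian image-formation model with inverse-square falloff. All function names (`normals_from_depth`, `render_lambertian`, `photometric_energy`), the orthographic normal computation, and the single-channel setup are illustrative assumptions, not the paper's actual formulation, which additionally handles super-resolution, reflectance, and camera motion jointly.

```python
import numpy as np

def normals_from_depth(z):
    # Finite-difference surface normals of a depth map z (orthographic sketch).
    zy, zx = np.gradient(z)
    n = np.stack([-zx, -zy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def render_lambertian(z, albedo, light_pos, xy):
    # Near-light Lambertian shading with inverse-square intensity falloff.
    n = normals_from_depth(z)
    pts = np.concatenate([xy, z[..., None]], axis=-1)  # 3D surface points
    d = light_pos - pts                                # point-to-light vectors
    r2 = np.sum(d * d, axis=-1, keepdims=True)         # squared light distance
    l = d / np.sqrt(r2)                                # unit light directions
    shading = np.clip(np.sum(n * l, axis=-1), 0.0, None) / r2[..., 0]
    return albedo * shading

def photometric_energy(z, albedo, images, lights, xy):
    # Sum of squared photometric residuals over all observations; a joint
    # optimization would minimize this over depth, albedo, and light poses.
    return sum(np.sum((render_lambertian(z, albedo, s, xy) - img) ** 2)
               for img, s in zip(images, lights))
```

In this toy setup the energy is zero at the ground-truth depth and albedo that generated an image, and grows when the depth is perturbed, which is the basic signal a variational solver exploits.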
Cite
Text
Sang et al. "Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach." Winter Conference on Applications of Computer Vision, 2020.

Markdown
[Sang et al. "Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/sang2020wacv-inferring/)

BibTeX
@inproceedings{sang2020wacv-inferring,
title = {{Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach}},
author = {Sang, Lu and Haefner, Bjoern and Cremers, Daniel},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://mlanthology.org/wacv/2020/sang2020wacv-inferring/}
}