Edge-Preserving Photometric Stereo via Depth Fusion

Abstract

We present a sensor fusion scheme that combines active stereo with photometric stereo. Aiming at capturing full-frame depth for dynamic scenes with a minimum of three lighting conditions, we formulate an iterative optimization scheme that (1) adaptively adjusts the contribution from photometric stereo so that depth discontinuities are preserved; (2) detects shadowed areas by checking the visibility of each estimated point with respect to the light source, instead of relying on image-based heuristics; and (3) behaves well for ill-conditioned, shadowed pixels, which are inevitable in almost any scene. Furthermore, we decompose our non-linear cost function into subproblems that can be optimized efficiently using linear techniques. Experiments show significantly improved results over the previous state of the art in sensor fusion.
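
The following is a minimal sketch, not the authors' code, of the visibility-based shadow test described in the abstract: a pixel is flagged as shadowed when its estimated 3D point is occluded from the light source. The function name, the pinhole model for a virtual camera placed at the light, and the depth-buffer tolerance are illustrative assumptions.

import numpy as np

def shadow_mask(points, light_K, light_pose, light_depth, tol=1e-2):
    """points: (N, 3) estimated 3D points in world coordinates.
    light_K: (3, 3) intrinsics of a virtual camera placed at the light.
    light_pose: (3, 4) [R | t] mapping world -> light camera coordinates.
    light_depth: (H, W) depth buffer rendered from the light's viewpoint.
    Returns a boolean mask: True where the point is NOT visible to the light."""
    R, t = light_pose[:, :3], light_pose[:, 3]
    cam = points @ R.T + t                       # world -> light camera frame
    z = cam[:, 2]                                # depth along the light's viewing axis
    uvw = cam @ light_K.T                        # perspective projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, light_depth.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, light_depth.shape[0] - 1)
    # Shadowed if the point lies behind the first surface the light sees.
    return z > light_depth[v, u] + tol

Pixels flagged by such a test would then be treated as ill-conditioned in the fusion step, rather than being rejected by image-intensity heuristics.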

Cite

Text

Zhang et al. "Edge-Preserving Photometric Stereo via Depth Fusion." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6247962

Markdown

[Zhang et al. "Edge-Preserving Photometric Stereo via Depth Fusion." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/zhang2012cvpr-edge/) doi:10.1109/CVPR.2012.6247962

BibTeX

@inproceedings{zhang2012cvpr-edge,
  title     = {{Edge-Preserving Photometric Stereo via Depth Fusion}},
  author    = {Zhang, Qing and Ye, Mao and Yang, Ruigang and Matsushita, Yasuyuki and Wilburn, Bennett and Yu, Huimin},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2012},
  pages     = {2472--2479},
  doi       = {10.1109/CVPR.2012.6247962},
  url       = {https://mlanthology.org/cvpr/2012/zhang2012cvpr-edge/}
}