Depth from Defocus in the Wild

Abstract

We consider the problem of two-frame depth from defocus in conditions unsuitable for existing methods yet typical of everyday photography: a handheld cellphone camera, a small aperture, a non-stationary scene and sparse surface texture. Our approach combines a global analysis of image content---3D surfaces, deformations, figure-ground relations, textures---with local estimation of joint depth-flow likelihoods in tiny patches. To enable local estimation we (1) derive novel defocus-equalization filters that induce brightness constancy across frames and (2) impose a tight upper bound on defocus blur---just three pixels in radius---through an appropriate choice of the second frame. For global analysis we use a novel piecewise-spline scene representation that can propagate depth and flow across large irregularly-shaped regions. Our experiments show that this combination preserves sharp boundaries and yields good depth and flow maps in the face of significant noise, uncertainty, non-rigidity, and data sparsity.
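To make the "defocus-equalization" idea in the abstract concrete, here is a minimal sketch (not the authors' implementation) of the underlying principle: for a hypothesized depth, cross-blurring each frame with the other frame's defocus kernel gives both frames the same total blur, restoring brightness constancy so small patches can be compared directly. The Gaussian blur model, the sum-of-squared-differences cost, and all function names below are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def equalize_defocus(frame_a, frame_b, sigma_a, sigma_b):
    """Cross-blur two frames so both carry the combined defocus of a depth hypothesis."""
    a_eq = gaussian_filter(frame_a, sigma=sigma_b)  # add frame B's hypothesized blur to frame A
    b_eq = gaussian_filter(frame_b, sigma=sigma_a)  # add frame A's hypothesized blur to frame B
    return a_eq, b_eq

def patch_cost(frame_a, frame_b, sigma_a, sigma_b):
    """SSD cost after equalization; lower means the depth hypothesis fits the patch better."""
    a_eq, b_eq = equalize_defocus(frame_a, frame_b, sigma_a, sigma_b)
    return float(np.sum((a_eq - b_eq) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((32, 32))                  # latent sharp texture patch
    frame_a = gaussian_filter(sharp, sigma=1.0)   # true defocus blur in frame A
    frame_b = gaussian_filter(sharp, sigma=2.0)   # true defocus blur in frame B
    # The correct hypothesis (1.0, 2.0) should score lower than an incorrect one.
    print(patch_cost(frame_a, frame_b, 1.0, 2.0))
    print(patch_cost(frame_a, frame_b, 3.0, 0.5))

With Gaussian kernels the cross-blurred frames both end up with standard deviation sqrt(sigma_a^2 + sigma_b^2), which is why the correct hypothesis drives the cost toward zero; the paper's tight three-pixel bound on blur radius keeps these kernels, and hence the equalization filters, very small.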

Cite

Text

Tang et al. "Depth from Defocus in the Wild." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.507

Markdown

[Tang et al. "Depth from Defocus in the Wild." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/tang2017cvpr-depth/) doi:10.1109/CVPR.2017.507

BibTeX

@inproceedings{tang2017cvpr-depth,
  title     = {{Depth from Defocus in the Wild}},
  author    = {Tang, Huixuan and Cohen, Scott and Price, Brian and Schiller, Stephen and Kutulakos, Kiriakos N.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.507},
  url       = {https://mlanthology.org/cvpr/2017/tang2017cvpr-depth/}
}