Fast Bilateral-Space Stereo for Synthetic Defocus
Abstract
Given a stereo pair it is possible to recover a depth map and use that depth to render a synthetically defocused image. Though stereo algorithms are well-studied, rarely are those algorithms considered solely in the context of producing these defocused renderings. In this paper we present a technique for efficiently producing disparity maps using a novel optimization framework in which inference is performed in "bilateral-space". Our approach produces higher-quality "defocus" results than other stereo algorithms while also being 10-100 times faster than comparable techniques.
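The paper itself ships no code here, but the final rendering step it describes (using a recovered disparity map to synthesize defocus) can be illustrated with a minimal sketch. The snippet below quantizes a disparity map into a few layers, blurs the image with a Gaussian whose width grows with distance from a chosen focal disparity, and composites per pixel. The function name `synthetic_defocus`, the layer count, and the `blur_scale` parameter are assumptions for illustration only, not the authors' renderer, which models defocus far more carefully.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def synthetic_defocus(image, disparity, focal_disparity, blur_scale=0.5, n_layers=8):
    """Naive synthetic-defocus rendering from a disparity map (illustrative sketch).

    image:            H x W x 3 float array in [0, 1]
    disparity:        H x W float array, e.g. output of a stereo matcher
    focal_disparity:  disparity value that should remain in focus
    blur_scale:       Gaussian sigma per unit of disparity difference (assumed parameter)
    """
    # Quantize disparity into a small number of layers so only one
    # blurred copy of the image is needed per layer.
    d_min, d_max = disparity.min(), disparity.max()
    centers = np.linspace(d_min, d_max, n_layers)
    layer_idx = np.abs(disparity[..., None] - centers).argmin(axis=-1)

    out = np.zeros_like(image)
    for i, c in enumerate(centers):
        # Blur strength grows with distance from the focal plane.
        sigma = blur_scale * abs(c - focal_disparity)
        # Blur spatially only; leave the color channels untouched.
        blurred = gaussian_filter(image, sigma=(sigma, sigma, 0)) if sigma > 0 else image
        mask = layer_idx == i
        out[mask] = blurred[mask]
    return out
```

This gather-style compositing ignores occlusion and partial-occlusion effects at depth boundaries; it is only meant to convey how a disparity map drives a spatially varying blur.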
Cite
Text
Barron et al. "Fast Bilateral-Space Stereo for Synthetic Defocus." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7299076
Markdown
[Barron et al. "Fast Bilateral-Space Stereo for Synthetic Defocus." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/barron2015cvpr-fast/) doi:10.1109/CVPR.2015.7299076
BibTeX
@inproceedings{barron2015cvpr-fast,
title = {{Fast Bilateral-Space Stereo for Synthetic Defocus}},
author = {Barron, Jonathan T. and Adams, Andrew and Shih, YiChang and Hernandez, Carlos},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7299076},
url = {https://mlanthology.org/cvpr/2015/barron2015cvpr-fast/}
}