Fusing Depth from Defocus and Stereo with Coded Apertures
Abstract
In this paper we propose a novel depth measurement method that fuses depth from defocus (DFD) and stereo. One problem with passive stereo is the difficulty of finding correct correspondences between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of DFD is inherently limited by the effective diameter of the lens. We therefore propose fusing stereo and DFD by giving different focus distances to the left and right cameras of a stereo pair with coded apertures. The two depth cues, defocus and disparity, are naturally integrated through the magnification and phase shift of a single point spread function (PSF) per camera. We prove a proportional relationship between the diameter of defocus and the disparity, which makes calibration easy. Through simulations and real experiments, we show that our method achieves outstanding performance by combining the advantages of both depth cues.
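The proportional relationship between defocus diameter and disparity can be illustrated with a standard thin-lens model (this is a minimal sketch under assumed camera parameters, not the paper's coded-aperture derivation): the signed blur diameter b = A·v·(1/Zf − 1/Z) and the disparity d = v·B/Z are both affine in inverse depth 1/Z, so b is a linear function of d with slope −A/B. All numeric values (focal length f, aperture A, baseline B, focus distance Zf) are illustrative assumptions.

```python
import numpy as np

# Assumed camera parameters (illustrative only)
f = 0.050    # focal length [m]
A = 0.010    # aperture diameter [m]
B = 0.100    # stereo baseline [m]
Zf = 2.0     # focus distance [m]

# Thin lens: sensor sits at the image distance of the focus plane
v = f * Zf / (Zf - f)

Z = np.linspace(1.0, 10.0, 50)      # object depths [m]
b = A * v * (1.0 / Zf - 1.0 / Z)    # signed defocus-blur diameter on the sensor
d = v * B / Z                       # stereo disparity on the sensor

# Both cues are affine in 1/Z, so b vs. d is exactly linear
slope, intercept = np.polyfit(d, b, 1)
residual = np.max(np.abs(b - (slope * d + intercept)))
print(slope, -A / B, residual)      # slope matches -A/B; residual is ~0
```

In this idealized model the linear fit is exact, which is what makes a joint defocus/disparity calibration tractable: estimating one cue constrains the other.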
Cite
Text
Takeda et al. "Fusing Depth from Defocus and Stereo with Coded Apertures." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.34
Markdown
[Takeda et al. "Fusing Depth from Defocus and Stereo with Coded Apertures." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/takeda2013cvpr-fusing/) doi:10.1109/CVPR.2013.34
BibTeX
@inproceedings{takeda2013cvpr-fusing,
title = {{Fusing Depth from Defocus and Stereo with Coded Apertures}},
author = {Takeda, Yuichi and Hiura, Shinsaku and Sato, Kosuke},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2013},
doi = {10.1109/CVPR.2013.34},
url = {https://mlanthology.org/cvpr/2013/takeda2013cvpr-fusing/}
}