Spatially-Varying Autofocus
Abstract
A lens brings a single plane into focus on a planar sensor; hence, parts of the scene that lie outside this focal plane are resolved on the sensor with defocus blur. Can we break this precept by enabling a "lens" that can change its depth-of-field arbitrarily? This work investigates the design and implementation of such a computational lens with spatially-selective focusing. Our design uses an optical arrangement of a Lohmann lens and a phase-only spatial light modulator to allow each pixel to focus at a different depth. We extend classical autofocusing techniques to the spatially-varying scenario, where the depth map is iteratively estimated using contrast and disparity cues, enabling the camera to progressively shape its depth-of-field to the scene's depth. By obtaining an optical all-in-focus image, our technique improves upon a broad swathe of prior work, ranging from depth-from-focus/defocus to coded aperture techniques, in two key aspects: the ability to bring an entire scene into focus simultaneously, and the ability to maintain the highest possible spatial resolution.
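The iterative, contrast-driven estimation described in the abstract can be illustrated with a minimal sketch, assuming a per-pixel contrast-maximization loop over a spatially-varying focus map. The capture interface, parameter names, and the Laplacian sharpness cue below are illustrative assumptions (the paper additionally uses disparity cues), not the authors' implementation.

import numpy as np
from scipy.ndimage import laplace, median_filter

def spatially_varying_autofocus(capture, focus_min, focus_max, shape,
                                n_coarse=16, n_refine=3):
    """Sketch of contrast-based spatially-varying autofocus.

    `capture(focus_map)` is a hypothetical camera interface returning a
    grayscale image (same shape as `focus_map`) in which each pixel is
    focused at the setting stored in `focus_map`.
    """
    # Coarse pass: sweep uniform focus planes (a conventional focal stack)
    # and take the per-pixel argmax of a local sharpness cue.
    best_contrast = np.full(shape, -np.inf)
    focus_map = np.full(shape, (focus_min + focus_max) / 2.0)
    for f in np.linspace(focus_min, focus_max, n_coarse):
        img = capture(np.full(shape, f))
        contrast = np.abs(laplace(img))          # simple contrast cue
        better = contrast > best_contrast
        focus_map[better] = f
        best_contrast[better] = contrast[better]

    # Refinement passes: perturb the current spatially-varying focus map
    # and keep, per pixel, whichever offset yields the sharpest result.
    step = (focus_max - focus_min) / n_coarse
    for _ in range(n_refine):
        base = focus_map.copy()
        best_contrast = np.full(shape, -np.inf)
        for delta in (-step, 0.0, step):
            candidate = base + delta
            img = capture(candidate)
            contrast = np.abs(laplace(img))
            better = contrast > best_contrast
            focus_map[better] = candidate[better]
            best_contrast[better] = contrast[better]
        # Smooth to suppress noisy per-pixel assignments, then halve the step.
        focus_map = median_filter(focus_map, size=9)
        step *= 0.5

    return focus_map

In this reading, the coarse sweep mirrors a classical focal stack, while the refinement passes progressively shape the per-pixel focus map to the scene's depth, which is the behavior the abstract describes.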
Cite
Text
Qin et al. "Spatially-Varying Autofocus." International Conference on Computer Vision, 2025.
Markdown
[Qin et al. "Spatially-Varying Autofocus." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/qin2025iccv-spatiallyvarying/)
BibTeX
@inproceedings{qin2025iccv-spatiallyvarying,
  title     = {{Spatially-Varying Autofocus}},
  author    = {Qin, Yingsi and Sankaranarayanan, Aswin C. and O'Toole, Matthew},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {24645--24654},
  url       = {https://mlanthology.org/iccv/2025/qin2025iccv-spatiallyvarying/}
}