Scene-Space Feature Detectors
Abstract
Derivative, spectral, and other operators implemented as filter kernels and applied by convolution are well studied in image recognition. In this paper we explore a novel system that applies such filters in the actual 3D scene. The system projects images of the filters into the scene as controlled illumination; the projected filters are modulated by object surface reflectance (BRDF) and scene composition, and the filter response is captured by camera pixels. Our approach uses a pixel-based convolution formulation that exploits the intrinsic integration performed by individual pixel sensors and diffraction-limited optics. For example, we project a grid of Gaussian derivative filters as images and retrieve each filter response directly as a pixel value. This is a local (pixel-only) computation that is not limited by camera resolution. Our system can serve as a front-end to existing techniques for surface or geometry estimation from texture analysis. A projector and camera pair is the basic equipment requirement.
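The paper does not include code, but the projected-pattern idea can be illustrated with a short sketch: generate a 2D Gaussian x-derivative kernel as an image to be sent to the projector. Since a projector cannot emit negative light, the sketch shifts and scales the signed kernel into [0, 1]; the kernel size, sigma, and this normalization are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_derivative_pattern(size=64, sigma=8.0):
    """First-order Gaussian derivative in x, prepared as a
    projectable illumination pattern (values mapped to [0, 1]).

    size and sigma are hypothetical parameters for illustration.
    """
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    dgdx = -xx / sigma**2 * g  # derivative of the Gaussian along x
    # A projector cannot emit negative intensities, so shift and
    # rescale the signed kernel into the displayable range [0, 1].
    lo, hi = dgdx.min(), dgdx.max()
    return (dgdx - lo) / (hi - lo)

pattern = gaussian_derivative_pattern()
```

In the system described by the abstract, a grid of such patterns would be projected into the scene, and each filter response would then be read back as a single camera pixel value.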
Cite
Text
Jean. "Scene-Space Feature Detectors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007. doi:10.1109/CVPR.2007.383362
Markdown
[Jean. "Scene-Space Feature Detectors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007.](https://mlanthology.org/cvpr/2007/jean2007cvpr-scene/) doi:10.1109/CVPR.2007.383362
BibTeX
@inproceedings{jean2007cvpr-scene,
title = {{Scene-Space Feature Detectors}},
author = {Jean, Yves},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2007},
doi = {10.1109/CVPR.2007.383362},
url = {https://mlanthology.org/cvpr/2007/jean2007cvpr-scene/}
}