Reconstructing an Image from Its Local Descriptors
Abstract
This paper shows that an image can be approximately reconstructed from the output of black-box local description software such as that classically used for image indexing. Our approach first uses an off-the-shelf image database to find patches that are visually similar to each region of interest of the unknown input image, according to the associated local descriptors. These patches are then warped into the input image domain according to the interest region geometry and seamlessly stitched together. Final completion of the remaining texture-free regions is obtained by smooth interpolation. As demonstrated in our experiments, visually meaningful reconstructions are obtained solely from image local descriptors such as SIFT, provided the geometry of the regions of interest is known. The reconstruction most often allows a clear interpretation of the semantic image content. As a result, this work raises critical issues of privacy and rights when local descriptors of photos or videos are given away for indexing and search purposes.
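The first step of the pipeline described above is nearest-neighbor retrieval: each local descriptor of the input image is matched against a database of descriptors, and the patch attached to the closest database descriptor is selected. A minimal sketch of that retrieval step, using plain numpy and Euclidean distance (the function name, toy data, and descriptors here are illustrative, not the authors' code):

```python
import numpy as np

def retrieve_patches(query_descriptors, db_descriptors, db_patches):
    """For each query descriptor, return the database patch whose
    descriptor is nearest in Euclidean distance.

    Hypothetical helper standing in for the paper's retrieval step;
    descriptors are (n, d) float arrays, e.g. 128-D SIFT vectors."""
    # Pairwise squared Euclidean distances, shape (n_query, n_db),
    # via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b.
    d2 = (np.sum(query_descriptors ** 2, axis=1)[:, None]
          + np.sum(db_descriptors ** 2, axis=1)[None, :]
          - 2.0 * query_descriptors @ db_descriptors.T)
    nearest = np.argmin(d2, axis=1)
    return [db_patches[i] for i in nearest], nearest

# Toy example with random 128-D "SIFT-like" descriptors.
rng = np.random.default_rng(0)
db_desc = rng.random((50, 128))
db_patches = [f"patch_{i}" for i in range(50)]
# Queries are slightly perturbed copies of database entries 3 and 7.
queries = db_desc[[3, 7]] + 0.001 * rng.random((2, 128))
patches, idx = retrieve_patches(queries, db_desc, db_patches)
```

In the full method, each retrieved patch would then be warped to the query region's geometry (position, scale, orientation) before stitching; an exhaustive distance matrix like this is only practical for small databases, and approximate nearest-neighbor indexing would replace it at scale.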
Cite
Text
Weinzaepfel et al. "Reconstructing an Image from Its Local Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995616
Markdown
[Weinzaepfel et al. "Reconstructing an Image from Its Local Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/weinzaepfel2011cvpr-reconstructing/) doi:10.1109/CVPR.2011.5995616
BibTeX
@inproceedings{weinzaepfel2011cvpr-reconstructing,
title = {{Reconstructing an Image from Its Local Descriptors}},
author = {Weinzaepfel, Philippe and Jégou, Hervé and Pérez, Patrick},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {337-344},
doi = {10.1109/CVPR.2011.5995616},
url = {https://mlanthology.org/cvpr/2011/weinzaepfel2011cvpr-reconstructing/}
}