Local Convolutional Features with Unsupervised Training for Image Retrieval
Abstract
Patch-level descriptors underlie several important computer vision tasks, such as stereo matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapts the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework for learning convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel ``RomePatches'' dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.
Cite
Text
Paulin et al. "Local Convolutional Features with Unsupervised Training for Image Retrieval." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.19
Markdown
[Paulin et al. "Local Convolutional Features with Unsupervised Training for Image Retrieval." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/paulin2015iccv-local/) doi:10.1109/ICCV.2015.19
BibTeX
@inproceedings{paulin2015iccv-local,
title = {{Local Convolutional Features with Unsupervised Training for Image Retrieval}},
author = {Paulin, Mattis and Douze, Matthijs and Harchaoui, Zaid and Mairal, Julien and Perronnin, Florent and Schmid, Cordelia},
booktitle = {International Conference on Computer Vision},
year = {2015},
doi = {10.1109/ICCV.2015.19},
url = {https://mlanthology.org/iccv/2015/paulin2015iccv-local/}
}