Deep Self-Correlation Descriptor for Dense Cross-Modal Correspondence
Abstract
We present a novel descriptor, called deep self-correlation (DSC), designed for establishing dense correspondences between images taken under different imaging modalities, such as different spectral ranges or lighting conditions. Motivated by local self-similarity (LSS), we formulate a novel descriptor by leveraging LSS in a deep architecture, leading to better discriminative power and greater robustness to non-rigid image deformations than state-of-the-art descriptors. DSC first computes self-correlation surfaces over a local support window for randomly sampled patches, and then builds hierarchical self-correlation surfaces by performing average pooling within a deep architecture. Finally, the feature responses on the self-correlation surfaces are encoded through spatial pyramid pooling in a circular configuration. In contrast to convolutional neural network (CNN)-based descriptors, DSC is training-free, is robust to cross-modal imaging, and can be densely computed in an efficient manner that significantly reduces computational redundancy. The state-of-the-art performance of DSC on challenging cases of cross-modal image pairs is demonstrated through extensive experiments.
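The pipeline the abstract describes (self-correlation over a support window, hierarchical average pooling, then circular spatial pooling) can be illustrated with a minimal sketch. This is not the authors' implementation: the patch size, pooling factor, and the (ring, sector) binning below are illustrative assumptions, and zero-mean normalized cross-correlation is used as the self-correlation measure.

```python
import numpy as np

def self_correlation_surface(window, patch_size=3):
    """Correlate the center patch of a local support window with every
    patch in the window, using zero-mean normalized cross-correlation.
    `window` is a 2-D grayscale array; returns a 2-D correlation surface."""
    h, w = window.shape
    r = patch_size // 2
    cy, cx = h // 2, w // 2
    center = window[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    center = center - center.mean()
    denom_c = np.sqrt((center ** 2).sum()) + 1e-8
    surf = np.zeros((h - 2 * r, w - 2 * r))
    for i in range(r, h - r):
        for j in range(r, w - r):
            p = window[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            p = p - p.mean()
            denom_p = np.sqrt((p ** 2).sum()) + 1e-8
            surf[i - r, j - r] = (center * p).sum() / (denom_c * denom_p)
    return surf

def average_pool(surf, k=2):
    """One level of the hierarchy: non-overlapping k x k average pooling."""
    h, w = surf.shape
    h2, w2 = h // k, w // k
    return surf[:h2 * k, :w2 * k].reshape(h2, k, w2, k).mean(axis=(1, 3))

def circular_pool(surf, n_rings=2, n_sectors=4):
    """Encode a correlation surface by max-pooling over (ring, sector)
    bins centered on the surface -- a circular spatial pooling in the
    spirit of the paper (bin layout here is a simplifying assumption)."""
    h, w = surf.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    rad = np.hypot(ys - cy, xs - cx)
    ang = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)
    r_max = rad.max() + 1e-8
    feat = []
    for ri in range(n_rings):
        for si in range(n_sectors):
            mask = ((rad >= ri * r_max / n_rings) &
                    (rad < (ri + 1) * r_max / n_rings) &
                    (ang >= si * 2 * np.pi / n_sectors) &
                    (ang < (si + 1) * 2 * np.pi / n_sectors))
            feat.append(surf[mask].max() if mask.any() else 0.0)
    return np.array(feat)

# Toy usage: one descriptor for the center pixel of a random window.
rng = np.random.default_rng(0)
window = rng.random((11, 11))
surf = self_correlation_surface(window, patch_size=3)   # 9 x 9 surface
pooled = average_pool(surf, k=2)                        # 4 x 4 surface
descriptor = circular_pool(pooled, n_rings=2, n_sectors=4)  # length 8
```

Because the descriptor is built from correlations of the image with itself rather than from raw intensities, it is insensitive to the per-modality appearance changes that defeat intensity-based descriptors, which is the key idea the abstract states.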
Cite
Text
Kim et al. "Deep Self-Correlation Descriptor for Dense Cross-Modal Correspondence." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46484-8_41
Markdown
[Kim et al. "Deep Self-Correlation Descriptor for Dense Cross-Modal Correspondence." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/kim2016eccv-deep/) doi:10.1007/978-3-319-46484-8_41
BibTeX
@inproceedings{kim2016eccv-deep,
title = {{Deep Self-Correlation Descriptor for Dense Cross-Modal Correspondence}},
author = {Kim, Seungryong and Min, Dongbo and Lin, Stephen and Sohn, Kwanghoon},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {679-695},
doi = {10.1007/978-3-319-46484-8_41},
url = {https://mlanthology.org/eccv/2016/kim2016eccv-deep/}
}