Multi-Scale Spatially-Asymmetric Recalibration for Image Classification
Abstract
Convolution is spatially symmetric, i.e., a visual feature's response is independent of its position in the image, which limits its ability to use spatial information. This paper addresses this issue with a recalibration process, which refers to the surrounding region of each neuron, computes an importance value, and multiplies the original neural response by it. Our approach is named multi-scale spatially-asymmetric recalibration (MS-SAR), which, besides introducing spatial asymmetry into convolution, extracts visual cues from regions at multiple scales to allow richer information to be incorporated. MS-SAR is implemented efficiently, so that only a small fraction of extra parameters and computations is required. We apply MS-SAR to several popular network architectures, in which all convolutional layers are recalibrated, and demonstrate superior performance on both the CIFAR and ILSVRC2012 classification tasks.
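The recalibration step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: for each neuron, an importance value is computed from its surrounding region (here, plain average pooling over square windows at several scales, followed by a sigmoid), and the original response is multiplied by the averaged importance. The learned per-scale transforms from the paper are omitted, and the scale sizes are arbitrary choices for the example.

```python
import numpy as np

def ms_sar_sketch(x, scales=(2, 4)):
    """Hypothetical sketch of multi-scale spatially-asymmetric recalibration.

    x: feature map of shape (C, H, W).
    For each spatial position, pool the surrounding window at each scale,
    squash the pooled value with a sigmoid to get an importance weight,
    average the weights across scales, and multiply them onto x.
    """
    C, H, W = x.shape
    weight = np.zeros_like(x)
    for s in scales:
        pooled = np.zeros_like(x)
        for i in range(H):
            for j in range(W):
                # Average over the (2s+1)x(2s+1) window, clipped at borders.
                i0, i1 = max(0, i - s), min(H, i + s + 1)
                j0, j1 = max(0, j - s), min(W, j + s + 1)
                pooled[:, i, j] = x[:, i0:i1, j0:j1].mean(axis=(1, 2))
        # Sigmoid maps each pooled value to an importance weight in (0, 1).
        weight += 1.0 / (1.0 + np.exp(-pooled))
    weight /= len(scales)
    # Spatially-asymmetric recalibration: each position gets its own weight.
    return x * weight
```

Because the weight is computed from a position-dependent neighborhood, two identical responses at different locations can be rescaled differently, which is the spatial asymmetry the paper introduces; the multiple window sizes supply the multi-scale context.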
Cite
Text
Wang et al. "Multi-Scale Spatially-Asymmetric Recalibration for Image Classification." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01261-8_31
Markdown
[Wang et al. "Multi-Scale Spatially-Asymmetric Recalibration for Image Classification." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/wang2018eccv-multiscale/) doi:10.1007/978-3-030-01261-8_31
BibTeX
@inproceedings{wang2018eccv-multiscale,
title = {{Multi-Scale Spatially-Asymmetric Recalibration for Image Classification}},
author = {Wang, Yan and Xie, Lingxi and Qiao, Siyuan and Zhang, Ya and Zhang, Wenjun and Yuille, Alan L.},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01261-8_31},
url = {https://mlanthology.org/eccv/2018/wang2018eccv-multiscale/}
}