Attentive Normalization
Abstract
In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous, with significant performance improvements shown across a wide range of tasks. They are usually studied as separate modules, however. In this paper, we propose a lightweight integration of the two schemas. We present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and uses their weighted sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention. In experiments, we test the proposed AN using four representative neural architectures (ResNets, DenseNets, MobileNets-v2, and AOGNets) on the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvements for different neural architectures in both benchmarks, with absolute top-1 accuracy gains in ImageNet-1000 between 0.5% and 2.7%, and absolute gains of up to 1.8% and 2.2% in bounding box and mask AP in MS-COCO, respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. Our reproducible source code is publicly available.
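The mechanism described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the attention head here is a toy single-linear-layer stand-in for the paper's channel-wise attention subnetwork, and the standardization step uses simple batch statistics.

```python
import numpy as np

def attentive_normalization(x, gammas, betas, attn_w, attn_b, eps=1e-5):
    """Sketch of Attentive Normalization (AN) on an NCHW feature map.

    x:       (N, C, H, W) input features
    gammas:  (K, C) scales of the K affine components in the mixture
    betas:   (K, C) shifts of the K affine components
    attn_w:  (K, C) weights of a toy attention head (assumption: the
             paper uses a small channel-attention subnetwork instead)
    attn_b:  (K,)   bias of the toy attention head
    """
    # 1) Standardize features per channel (batch-norm-style statistics).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)

    # 2) Instance-specific attention weights over the K components:
    #    global average pool -> linear -> softmax.
    pooled = x.mean(axis=(2, 3))                 # (N, C)
    logits = pooled @ attn_w.T + attn_b          # (N, K)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    lam = np.exp(logits)
    lam /= lam.sum(axis=1, keepdims=True)        # (N, K), rows sum to 1

    # 3) Weighted sum of the affine components: one affine per instance,
    #    instead of a single shared affine transformation.
    gamma = lam @ gammas                         # (N, C)
    beta = lam @ betas                           # (N, C)

    # 4) Re-calibrate the standardized features instance-specifically.
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]
```

With K = 1 the softmax weight is trivially 1 and the module reduces to an ordinary normalization layer with a single affine transformation, which is the baseline AN generalizes.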
Cite
Text
Li et al. "Attentive Normalization." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58520-4_5
Markdown
[Li et al. "Attentive Normalization." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/li2020eccv-attentive/) doi:10.1007/978-3-030-58520-4_5
BibTeX
@inproceedings{li2020eccv-attentive,
title = {{Attentive Normalization}},
author = {Li, Xilai and Sun, Wei and Wu, Tianfu},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58520-4_5},
url = {https://mlanthology.org/eccv/2020/li2020eccv-attentive/}
}