NAM: Normalization-Based Attention Module

Abstract

Recognizing less salient features is the key to model compression. However, such features have not been investigated in the prevailing attention mechanisms. In this work, we propose a novel normalization-based attention module (NAM), which suppresses less salient weights. It applies a weight sparsity penalty to the attention modules, making them more computationally efficient while retaining comparable performance. A comparison with three other attention mechanisms on both ResNet and MobileNet indicates that our method achieves higher accuracy. Code for this paper can be publicly accessed at https://github.com/Christian-lyc/NAM.
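The sketch below illustrates the core idea of a normalization-based channel attention in PyTorch: channel importance is taken from the batch-normalization scaling factors, normalized across channels, so less salient channels receive smaller weights. This is a minimal illustrative sketch, not the released implementation; the class name, the use of the magnitude of the scale factors, and other layer choices are assumptions made here for clarity.

```python
import torch
import torch.nn as nn


class NAMChannelAttention(nn.Module):
    """Sketch of a NAM-style channel attention block (illustrative, not the official code).

    Channel importance is derived from the BatchNorm scaling factors (gamma):
    channels with small gamma are treated as less salient and are suppressed.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        # Normalize the (magnitude of the) BN scale factors so they sum to 1
        # and use them as per-channel attention weights. Using abs() here is
        # an assumption; the released code may use the raw values.
        gamma = self.bn.weight.abs()
        weight = gamma / gamma.sum()
        x = x * weight.view(1, -1, 1, 1)
        # Gate the input with the re-weighted, sigmoid-activated response.
        return torch.sigmoid(x) * residual


if __name__ == "__main__":
    # Quick shape check on a dummy feature map.
    attn = NAMChannelAttention(channels=16)
    feats = torch.randn(2, 16, 32, 32)
    print(attn(feats).shape)  # torch.Size([2, 16, 32, 32])
```

The sparsity penalty mentioned in the abstract would then be added to the training loss as a regularizer on these scaling factors (e.g. an L1 term on gamma), encouraging unimportant channel weights to shrink toward zero.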

Cite

Text

Liu et al. "NAM: Normalization-Based Attention Module." NeurIPS 2021 Workshops: ImageNet_PPF, 2021.

Markdown

[Liu et al. "NAM: Normalization-Based Attention Module." NeurIPS 2021 Workshops: ImageNet_PPF, 2021.](https://mlanthology.org/neuripsw/2021/liu2021neuripsw-nam/)

BibTeX

@inproceedings{liu2021neuripsw-nam,
  title     = {{NAM: Normalization-Based Attention Module}},
  author    = {Liu, Yichao and Shao, Zongru and Teng, Yueyang and Hoffmann, Nico},
  booktitle = {NeurIPS 2021 Workshops: ImageNet_PPF},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/liu2021neuripsw-nam/}
}