CBAM: Convolutional Block Attention Module

Abstract

We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module that can be integrated into any feed-forward convolutional neural network. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial; the attention maps are then multiplied with the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architecture seamlessly with negligible overhead, and it is end-to-end trainable along with the base CNN. We validate CBAM through extensive experiments on the ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performance with various models, demonstrating the wide applicability of CBAM. The code and models will be made publicly available upon acceptance of the paper.
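
To make the mechanism concrete, below is a minimal PyTorch sketch of the sequential channel-then-spatial attention the abstract describes. This is an illustrative reading of the module, not the authors' released code; the reduction ratio (16) and the 7x7 convolution are the defaults reported in the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: squeeze spatial dims with average- and max-pooling,
    pass both descriptors through a shared MLP, and gate channels with a sigmoid."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # (B, C) from max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """Spatial attention: pool along the channel axis (average and max),
    convolve the 2-channel map, and gate spatial locations with a sigmoid."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)   # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Sequential channel-then-spatial refinement of an intermediate feature map."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial(self.channel(x))

The module preserves the input shape, so it can be dropped after any convolutional block, e.g. CBAM(64)(torch.randn(2, 64, 32, 32)) returns a refined tensor of shape (2, 64, 32, 32).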

Cite

Text

Woo et al. "CBAM: Convolutional Block Attention Module." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01234-2_1

Markdown

[Woo et al. "CBAM: Convolutional Block Attention Module." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/woo2018eccv-cbam/) doi:10.1007/978-3-030-01234-2_1

BibTeX

@inproceedings{woo2018eccv-cbam,
  title     = {{CBAM: Convolutional Block Attention Module}},
  author    = {Woo, Sanghyun and Park, Jongchan and Lee, Joon-Young and Kweon, In So},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01234-2_1},
  url       = {https://mlanthology.org/eccv/2018/woo2018eccv-cbam/}
}