BA-Net: Bridge Attention for Deep Convolutional Neural Networks

Abstract

In attention mechanism research, most existing methods struggle to exploit the information available in the neural network with high computational efficiency because of heavy feature compression in the attention layer. This paper proposes a simple and general approach, named Bridge Attention, to address this issue. BA-Net directly integrates features from previous layers and effectively promotes information interchange. Its implementation relies only on simple strategies, similar to SENet. Moreover, after extensively investigating the effectiveness of different previous features, we found a simple and interesting insight: bridging the batch-normalized (BN) outputs of all convolutions inside each block produces better attention and enhances the performance of neural networks. BA-Net is effective, stable, and easy to use. A comprehensive evaluation on computer vision tasks demonstrates that the proposed approach outperforms existing channel attention methods in both accuracy and computational efficiency.
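The sketch below illustrates the general idea described in the abstract: an SE-style channel attention module that, instead of squeezing only the block's final feature map, pools the BN outputs of the convolutions inside the block and fuses them before producing per-channel weights. This is a minimal sketch under assumed shapes and a simple summation-based fusion; the module name, projection layout, and fusion strategy are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn

class BridgeAttentionSketch(nn.Module):
    """Bridge-attention style channel attention (illustrative sketch).

    Pools the batch-normalized outputs of every convolution in a block,
    fuses them, and produces per-channel weights for the block's output,
    similar in spirit to an SE module but with bridged features.
    """

    def __init__(self, channels_per_branch, out_channels, reduction=16):
        super().__init__()
        hidden = max(out_channels // reduction, 8)
        # One squeeze projection per bridged feature (channel counts may differ).
        self.squeeze = nn.ModuleList(
            nn.Linear(c, hidden) for c in channels_per_branch
        )
        self.excite = nn.Linear(hidden, out_channels)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, bridged_feats, x):
        # bridged_feats: list of BN outputs from the convolutions inside the block
        # x: the block's final feature map to be re-weighted
        fused = 0
        for feat, fc in zip(bridged_feats, self.squeeze):
            fused = fused + fc(self.pool(feat).flatten(1))
        weights = torch.sigmoid(self.excite(torch.relu(fused)))
        return x * weights.unsqueeze(-1).unsqueeze(-1)

# Hypothetical usage: bridging two conv/BN outputs of a bottleneck-like block.
ba = BridgeAttentionSketch(channels_per_branch=[64, 64], out_channels=256)
f1, f2 = torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56)
x = torch.randn(2, 256, 56, 56)
y = ba([f1, f2], x)  # y has the same shape as x

Because the bridged features are reduced to a shared hidden size before fusion, the extra cost over a plain SE module is only a few small fully connected layers per block, which is consistent with the paper's emphasis on computational efficiency.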

Cite

Text

Zhao et al. "BA-Net: Bridge Attention for Deep Convolutional Neural Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19803-8_18

Markdown

[Zhao et al. "BA-Net: Bridge Attention for Deep Convolutional Neural Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/zhao2022eccv-banet/) doi:10.1007/978-3-031-19803-8_18

BibTeX

@inproceedings{zhao2022eccv-banet,
  title     = {{BA-Net: Bridge Attention for Deep Convolutional Neural Networks}},
  author    = {Zhao, Yue and Chen, Junzhou and Zhang, Zirui and Zhang, Ronghui},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19803-8_18},
  url       = {https://mlanthology.org/eccv/2022/zhao2022eccv-banet/}
}