MSVIT: Improving Spiking Vision Transformer Using Multi-Scale Attention Fusion

Abstract

The combination of Spiking Neural Networks (SNNs) with Vision Transformer architectures has attracted significant attention due to its great potential for energy-efficient and high-performance computing paradigms. However, a substantial performance gap still exists between SNN-based and ANN-based transformer architectures. While existing methods propose spiking self-attention mechanisms that are successfully combined with SNNs, the overall architectures proposed by these methods suffer from a bottleneck in effectively extracting features at different image scales. In this paper, we address this issue and propose MSVIT, a novel spike-driven Transformer architecture, which is the first to use multi-scale spiking attention (MSSA) to enrich the capability of spiking attention blocks. We validate our approach across several mainstream datasets. The experimental results indicate that MSVIT outperforms existing SNN-based models, positioning itself as a state-of-the-art solution among SNN-based transformer architectures. The code is available at https://github.com/Nanhu-AI-Lab/MSViT.
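To make the idea of multi-scale attention fusion concrete, below is a minimal, illustrative NumPy sketch, not the authors' actual MSSA implementation. It assumes a generic design in which binary spike activations are attended over at several pooled token scales and the per-scale outputs are fused by summation; all function names, thresholds, and the pooling/fusion scheme are hypothetical.

```python
import numpy as np

def heaviside_spike(x, threshold=1.0):
    # Spike generation: fire (1.0) where the membrane potential exceeds the threshold.
    return (x >= threshold).astype(np.float32)

def spiking_attention(x_spikes, wq, wk, wv, scale):
    # x_spikes: (tokens, dim) binary spike map; linear projections, then scaled
    # dot-product scores without softmax (a common choice in spiking attention).
    q, k, v = x_spikes @ wq, x_spikes @ wk, x_spikes @ wv
    attn = (q @ k.T) * scale
    return heaviside_spike(attn @ v, threshold=0.5)

def multi_scale_attention_fusion(x, pool_sizes=(1, 2, 4), seed=0):
    # Run spiking attention on token maps pooled at several scales,
    # upsample each result back to full resolution, and fuse by summation.
    rng = np.random.default_rng(seed)
    n, d = x.shape
    out = np.zeros_like(x)
    for p in pool_sizes:
        m = n // p
        # Average-pool tokens in groups of p to form a coarser scale.
        pooled = x[: m * p].reshape(m, p, d).mean(axis=1)
        spikes = heaviside_spike(pooled, threshold=0.5)
        wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
        y = spiking_attention(spikes, wq, wk, wv, scale=1.0 / np.sqrt(d))
        # Nearest-neighbour upsample back to the original token count.
        out[: m * p] += np.repeat(y, p, axis=0)
    # Re-binarise the fused map so the output remains spike-valued.
    return heaviside_spike(out, threshold=1.0)
```

Usage: for an input of shape `(8, 16)`, the fused output keeps the same shape and stays binary, so it can feed directly into subsequent spiking blocks.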

Cite

Text

Hua et al. "MSVIT: Improving Spiking Vision Transformer Using Multi-Scale Attention Fusion." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/601

Markdown

[Hua et al. "MSVIT: Improving Spiking Vision Transformer Using Multi-Scale Attention Fusion." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/hua2025ijcai-msvit/) doi:10.24963/IJCAI.2025/601

BibTeX

@inproceedings{hua2025ijcai-msvit,
  title     = {{MSVIT: Improving Spiking Vision Transformer Using Multi-Scale Attention Fusion}},
  author    = {Hua, Wei and Zhou, Chenlin and Wu, Jibin and Chua, Yansong and Shu, Yangyang},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {5399--5407},
  doi       = {10.24963/IJCAI.2025/601},
  url       = {https://mlanthology.org/ijcai/2025/hua2025ijcai-msvit/}
}