Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer
Abstract
Transformers have demonstrated outstanding performance across a wide range of tasks, owing to their self-attention mechanism, but they are highly energy-intensive. Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to traditional Artificial Neural Networks, leveraging event-driven computation and binary spikes for information transfer. Combining the representational power of Transformers with the energy efficiency of SNNs is therefore a compelling direction. This paper addresses the challenge of adapting the self-attention mechanism of Transformers to the spiking paradigm by introducing a novel approach: Accurate Addition-Only Spiking Self-Attention (A^2OS^2A). Unlike existing methods that rely solely on binary spiking neurons for all components of the self-attention mechanism, our approach integrates binary, ReLU, and ternary spiking neurons. This hybrid strategy significantly improves accuracy while keeping the computation multiplication-free, and it eliminates the need for softmax and scaling operations. Extensive experiments show that the A^2OS^2A-based Spiking Transformer outperforms existing SNN-based Transformers on several datasets, achieving an accuracy of 78.66% on ImageNet-1K. Our work represents a significant advancement in SNN-based Transformer models, offering a more accurate and efficient solution for real-world applications.
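To make the mechanism concrete, below is a minimal PyTorch sketch of how such an addition-only spiking attention could look. This is a hypothetical reading of the abstract, not the authors' implementation: the assignment of binary neurons to Query, ReLU to Key, and ternary neurons to Value is an assumption, the class name A2OS2ASelfAttention and the v_threshold parameter are invented for illustration, and the surrogate gradients needed for training are omitted.

import torch
import torch.nn as nn

class A2OS2ASelfAttention(nn.Module):
    """Sketch of Accurate Addition-Only Spiking Self-Attention (A^2OS^2A).

    Assumed reading of the abstract: Query spikes are binary {0, 1},
    Key activations pass through ReLU, and Value spikes are ternary
    {-1, 0, 1}. Because Q is binary and V is ternary, both products
    Q @ K^T and (Q @ K^T) @ V reduce to additions/subtractions, and
    no softmax or scaling is applied.
    """

    def __init__(self, dim: int, heads: int = 8, v_threshold: float = 1.0):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.head_dim = dim // heads
        self.v_threshold = v_threshold
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)

    def _binary_spike(self, x):
        # Binary spiking neuron: fire 1 when the potential crosses threshold.
        # (Training would need a surrogate gradient; omitted in this sketch.)
        return (x >= self.v_threshold).float()

    def _ternary_spike(self, x):
        # Ternary spiking neuron: fire +1 / -1 when |potential| crosses threshold.
        return (x >= self.v_threshold).float() - (x <= -self.v_threshold).float()

    def forward(self, x):
        b, n, d = x.shape
        h, hd = self.heads, self.head_dim
        q = self._binary_spike(self.q_proj(x)).view(b, n, h, hd).transpose(1, 2)
        k = torch.relu(self.k_proj(x)).view(b, n, h, hd).transpose(1, 2)
        v = self._ternary_spike(self.v_proj(x)).view(b, n, h, hd).transpose(1, 2)
        # Scores are non-negative (binary Q, ReLU K), so no softmax or
        # scaling is needed; both matmuls are realizable with additions only.
        attn = q @ k.transpose(-2, -1)  # (b, h, n, n), addition-only
        out = attn @ v                  # additions/subtractions only
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.out_proj(out)

# Smoke test on random tokens.
if __name__ == "__main__":
    layer = A2OS2ASelfAttention(dim=64, heads=8)
    tokens = torch.randn(2, 16, 64)
    print(layer(tokens).shape)  # torch.Size([2, 16, 64])

Under these assumptions, the binary Q and ternary V make every matrix product expressible with additions and subtractions, while the non-negative attention scores remove the need for softmax and scaling, matching the abstract's claims.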
Cite
Text
Guo et al. "Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02272
Markdown
[Guo et al. "Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/guo2025cvpr-spiking/) doi:10.1109/CVPR52734.2025.02272
BibTeX
@inproceedings{guo2025cvpr-spiking,
title = {{Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer}},
author = {Guo, Yufei and Liu, Xiaode and Chen, Yuanpei and Peng, Weihang and Zhang, Yuhan and Ma, Zhe},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {24398--24408},
doi = {10.1109/CVPR52734.2025.02272},
url = {https://mlanthology.org/cvpr/2025/guo2025cvpr-spiking/}
}