Spatial-Temporal Self-Attention for Asynchronous Spiking Neural Networks
Abstract
Brain-inspired spiking neural networks (SNNs) are receiving increasing attention due to their asynchronous, event-driven characteristics and low power consumption. As attention mechanisms have recently become an indispensable part of sequence-dependence modeling, combining SNNs with attention mechanisms holds great potential for energy-efficient, high-performance computing paradigms. However, existing works cannot benefit from both temporal-wise attention and the asynchronous characteristic of SNNs. To fully leverage the advantages of both SNNs and attention mechanisms, we propose an SNN-based spatial-temporal self-attention (STSA) mechanism, which calculates feature dependence across the time and space domains without destroying the asynchronous transmission properties of SNNs. To further improve performance, we also propose a spatial-temporal relative position bias (STRPB) for STSA that accounts for the spatiotemporal position of spikes. Based on STSA and STRPB, we construct a spatial-temporal spiking Transformer framework, named STS-Transformer, which is powerful and enables SNNs to work in an asynchronous event-driven manner. Extensive experiments are conducted on popular neuromorphic and speech datasets, including DVS128 Gesture, CIFAR10-DVS, and Google Speech Commands, and our models outperform other state-of-the-art models.
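The core idea of attending jointly over time and space can be illustrated by flattening the temporal and spatial token axes into one sequence before computing self-attention, so every query can attend across both domains. The sketch below is a minimal, assumed toy version using dense NumPy attention with softmax; the paper's actual spike-based formulation differs, and all weight matrices and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_temporal_attention(x, wq, wk, wv):
    """Toy spatial-temporal self-attention over spike features.

    x: (T, N, D) spike tensor: T time steps, N spatial tokens, D features.
    Merging the time and space axes into a single token axis lets each
    query attend across both domains, the basic intuition behind STSA.
    The softmax normalization and real-valued weights are illustrative
    assumptions, not the paper's exact spiking formulation.
    """
    T, N, D = x.shape
    tokens = x.reshape(T * N, D)           # merge time and space axes
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(D)          # (T*N, T*N) dependence map
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights @ v).reshape(T, N, D)  # restore (T, N, D) layout

rng = np.random.default_rng(0)
x = (rng.random((4, 8, 16)) > 0.8).astype(np.float32)  # sparse spikes
w = [rng.standard_normal((16, 16)).astype(np.float32) for _ in range(3)]
out = spatial_temporal_attention(x, *w)
print(out.shape)  # (4, 8, 16)
```

The spatial-temporal relative position bias (STRPB) would, in this picture, be an additive term on the `(T*N, T*N)` score matrix indexed by the relative time-space offset of each token pair.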
Cite
Text
Wang et al. "Spatial-Temporal Self-Attention for Asynchronous Spiking Neural Networks." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/344
Markdown
[Wang et al. "Spatial-Temporal Self-Attention for Asynchronous Spiking Neural Networks." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/wang2023ijcai-spatial/) doi:10.24963/IJCAI.2023/344
BibTeX
@inproceedings{wang2023ijcai-spatial,
title = {{Spatial-Temporal Self-Attention for Asynchronous Spiking Neural Networks}},
author = {Wang, Yuchen and Shi, Kexin and Lu, Chengzhuo and Liu, Yuguo and Zhang, Malu and Qu, Hong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {3085--3093},
doi = {10.24963/IJCAI.2023/344},
url = {https://mlanthology.org/ijcai/2023/wang2023ijcai-spatial/}
}