Spiking Point Transformer for Point Cloud Classification
Abstract
Spiking Neural Networks (SNNs) offer an attractive, energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their sparse binary activations. When SNNs meet Transformers, they show great potential in 2D image processing. However, their application to 3D point clouds remains underexplored. To this end, we present Spiking Point Transformer (SPT), the first transformer-based SNN framework for point cloud classification. Specifically, we first design Queue-Driven Sampling Direct Encoding for point clouds to reduce computational costs while retaining the most effective support points at each time step. We then introduce the Hybrid Dynamics Integrate-and-Fire Neuron (HD-IF), designed to simulate selective neuron activation and reduce over-reliance on specific artificial neurons. SPT attains state-of-the-art results in the SNN domain on three benchmark datasets spanning both real-world and synthetic data. Meanwhile, the theoretical energy consumption of SPT is at least 6.4x lower than that of its ANN counterpart.
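The HD-IF neuron builds on standard Integrate-and-Fire dynamics, in which a membrane potential accumulates input over discrete time steps and emits a binary spike on crossing a threshold. The sketch below shows only a plain hard-reset IF neuron to illustrate the sparse binary activations the abstract refers to; it is not the paper's HD-IF, whose hybrid dynamics are not specified here, and the threshold value is an arbitrary choice for illustration.

```python
import numpy as np

def integrate_and_fire(inputs, threshold=1.0):
    """Minimal hard-reset Integrate-and-Fire neuron (illustrative only).

    inputs: 1-D array of input currents, one per time step.
    Returns a binary spike train of the same length.
    """
    v = 0.0                           # membrane potential
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v += x                        # integrate input current
        if v >= threshold:            # fire when potential crosses threshold
            spikes[t] = 1.0
            v = 0.0                   # hard reset after a spike
    return spikes

print(integrate_and_fire(np.array([0.4, 0.4, 0.4, 0.9])))  # → [0. 0. 1. 0.]
```

Because activations are binary spikes, downstream layers replace most multiply-accumulate operations with additions, which is the source of the energy savings claimed for SNNs.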
Cite
Text
Wu et al. "Spiking Point Transformer for Point Cloud Classification." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I20.35459
Markdown
[Wu et al. "Spiking Point Transformer for Point Cloud Classification." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wu2025aaai-spiking/) doi:10.1609/AAAI.V39I20.35459
BibTeX
@inproceedings{wu2025aaai-spiking,
title = {{Spiking Point Transformer for Point Cloud Classification}},
author = {Wu, Peixi and Chai, Bosong and Li, Hebei and Zheng, Menghua and Peng, Yansong and Wang, Zeyu and Nie, Xuan and Zhang, Yueyi and Sun, Xiaoyan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {21563-21571},
doi = {10.1609/AAAI.V39I20.35459},
url = {https://mlanthology.org/aaai/2025/wu2025aaai-spiking/}
}