On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis

Abstract

Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs)—a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy under the black-box setting that significantly enhances membership inference against SNNs. Our findings challenge the assumption that SNNs are inherently more secure: despite expectations of improved robustness, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of ANNs.
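
To illustrate the input dropout idea described above, the following Python sketch is a hypothetical, black-box membership scoring routine, not the authors' released code. It assumes the target SNN is available only as a callable `model` that returns class probabilities, and that `p_drop`, `n_queries`, and the final threshold are illustrative parameters chosen by the attacker.

import numpy as np

def membership_score_with_input_dropout(model, x, y, p_drop=0.2, n_queries=10, rng=None):
    # Query the black-box SNN on several input-dropped copies of x and
    # measure how well the confidence on the true label y survives the
    # perturbation; training members are expected to remain more confident.
    rng = np.random.default_rng() if rng is None else rng
    confidences = []
    for _ in range(n_queries):
        mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)  # randomly drop input features/events
        probs = model(x * mask)                                  # single black-box query
        confidences.append(probs[y])
    return float(np.mean(confidences))

# Usage (illustrative): flag a sample as a training member if its averaged
# true-class confidence under input dropout exceeds a calibrated threshold.
# is_member = membership_score_with_input_dropout(model, x, y) > threshold

The design intuition is that memorized (member) samples tend to retain high confidence even when parts of the input are dropped, whereas non-members degrade more quickly, widening the gap an attacker can exploit.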

Cite

Text

Guan et al. "On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis." ICLR 2025 Workshops: ICBINB, 2025.

Markdown

[Guan et al. "On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis." ICLR 2025 Workshops: ICBINB, 2025.](https://mlanthology.org/iclrw/2025/guan2025iclrw-privacy/)

BibTeX

@inproceedings{guan2025iclrw-privacy,
  title     = {{On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis}},
  author    = {Guan, Junyi and Sharma, Abhijith and Tian, Chong and Lahlou, Salem},
  booktitle = {ICLR 2025 Workshops: ICBINB},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/guan2025iclrw-privacy/}
}