Toward Robust Spiking Neural Network Against Adversarial Perturbation

Abstract

As spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications, security concerns about SNNs are attracting more attention. Researchers have already demonstrated that an SNN can be attacked with adversarial examples, so building a robust SNN has become an urgent issue. Recently, many studies have applied certified training to artificial neural networks (ANNs), which can provably improve the robustness of an NN model. However, existing certification methods cannot be transferred to SNNs directly because of SNNs' distinct neuron behavior and input formats. In this work, we first design S-IBP and S-CROWN to tackle the non-linear functions in SNNs' neuron modeling. We then formalize the perturbation boundaries for both digital and spike inputs. Finally, we demonstrate the efficiency of our proposed robust training method on different datasets and model architectures. In our experiments, we achieve a maximum $37.7\%$ reduction in attack error at the cost of a $3.7\%$ loss in original accuracy. To the best of our knowledge, this is the first analysis of robust training for SNNs.
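For readers unfamiliar with the certified-training machinery the paper builds on, the sketch below shows vanilla interval bound propagation (IBP) through an affine layer followed by a ReLU. This is the classic ANN technique that S-IBP extends, not the paper's S-IBP itself (which additionally handles the membrane-potential dynamics and spike inputs of SNNs); all function and variable names here are illustrative assumptions, not from the authors' code.

```python
# Minimal sketch of interval bound propagation (IBP) for an ANN.
# Given elementwise input bounds [lo, hi], it computes sound
# elementwise bounds on the network output.
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Propagate bounds [lo, hi] through x -> W @ x + b.

    Center/radius form: the center maps through the affine layer,
    and the radius is scaled by |W|, which is exact for an affine map.
    """
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: bound a 2-layer network under an L-inf perturbation eps.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

x, eps = rng.standard_normal(4), 0.1
lo, hi = ibp_affine(W1, b1, x - eps, x + eps)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_affine(W2, b2, lo, hi)
print("certified output bounds:", lo, hi)
```

In certified training, bounds like these are folded into the loss (e.g., by penalizing the worst-case margin between the true class's lower bound and other classes' upper bounds); the paper's contribution is making this propagation work through spiking neuron dynamics and spike-encoded inputs.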

Cite

Text

Liang et al. "Toward Robust Spiking Neural Network Against Adversarial Perturbation." Neural Information Processing Systems, 2022.

Markdown

[Liang et al. "Toward Robust Spiking Neural Network Against Adversarial Perturbation." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/liang2022neurips-robust/)

BibTeX

@inproceedings{liang2022neurips-robust,
  title     = {{Toward Robust Spiking Neural Network Against Adversarial Perturbation}},
  author    = {Liang, Ling and Xu, Kaidi and Hu, Xing and Deng, Lei and Xie, Yuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/liang2022neurips-robust/}
}