Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
Abstract
Graph neural networks (GNNs) are vulnerable to adversarial perturbations, including those that affect both node features and graph topology. This paper investigates GNNs derived from diverse neural flows, concentrating on their connection to various stability notions such as BIBO stability, Lyapunov stability, structural stability, and conservative stability. We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness. Inspired by physics principles, we advocate for the use of conservative Hamiltonian neural flows to construct GNNs that are robust to adversarial attacks. The adversarial robustness of different neural flow GNNs is empirically compared on several benchmark datasets under a variety of adversarial attacks. Extensive numerical experiments demonstrate that GNNs leveraging conservative Hamiltonian flows with Lyapunov stability substantially improve robustness against adversarial perturbations. The implementation code for the experiments is available at \url{https://github.com/zknus/NeurIPS-2023-HANG-Robustness}.
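The abstract's central idea, evolving node features under a conservative Hamiltonian flow over the graph, can be illustrated with a minimal sketch. This is not the authors' HANG implementation (see the linked repository for that): the class name `HamiltonianGraphFlow`, the energy network, and all hyperparameters below are hypothetical, and a simple forward-Euler integrator stands in for a proper symplectic scheme.

```python
# Minimal sketch (assumptions, not the authors' method): node features are split
# into "position" q and "momentum" p, a learned scalar Hamiltonian H(q, p)
# couples neighbors through the adjacency matrix, and the features evolve by
# Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq, which conserve H.
import torch
import torch.nn as nn


class HamiltonianGraphFlow(nn.Module):
    def __init__(self, hidden_dim: int, n_steps: int = 5, dt: float = 0.1):
        super().__init__()
        self.n_steps, self.dt = n_steps, dt
        # Learned energy: maps each node's (q, p, neighbor-aggregated q) state
        # to a scalar contribution; the total Hamiltonian is the sum over nodes.
        self.energy = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def hamiltonian(self, q, p, adj):
        agg = adj @ q  # couple neighboring nodes through the adjacency matrix
        return self.energy(torch.cat([q, p, agg], dim=-1)).sum()

    def forward(self, x, adj):
        # Split node features into canonical "position" q and "momentum" p.
        q, p = x.chunk(2, dim=-1)
        with torch.enable_grad():
            if not q.requires_grad:
                q = q.clone().requires_grad_(True)
                p = p.clone().requires_grad_(True)
            for _ in range(self.n_steps):
                H = self.hamiltonian(q, p, adj)
                dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
                # Forward-Euler step of Hamilton's equations (a symplectic
                # integrator such as leapfrog would preserve the energy more
                # faithfully; Euler is used here for brevity).
                q = q + self.dt * dHdp   # dq/dt =  dH/dp
                p = p - self.dt * dHdq   # dp/dt = -dH/dq
        return torch.cat([q, p], dim=-1)


# Toy usage on a 4-node cycle graph with 8-dim features (4-dim q, 4-dim p).
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
x = torch.randn(4, 8)
out = HamiltonianGraphFlow(hidden_dim=4)(x, adj)
print(out.shape)  # torch.Size([4, 8])
```

The intuition matching the abstract is that the flow conserves a learned energy, so perturbations of the input cannot be amplified arbitrarily as the features evolve, in contrast to flows that merely satisfy Lyapunov stability.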
Cite
Text
Zhao et al. "Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach." Neural Information Processing Systems, 2023.Markdown
[Zhao et al. "Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/zhao2023neurips-adversarial/)BibTeX
@inproceedings{zhao2023neurips-adversarial,
  title     = {{Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach}},
  author    = {Zhao, Kai and Kang, Qiyu and Song, Yang and She, Rui and Wang, Sijie and Tay, Wee Peng},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/zhao2023neurips-adversarial/}
}