A Survey on Model-Free Goal Recognition

Abstract

Federated Graph Neural Networks (FedGNNs) integrate federated learning (FL) with graph neural networks (GNNs) to enable privacy-preserving training on distributed graph data. Vertical Federated Graph Neural Networks (VFGNNs), a key branch of FedGNN, handle scenarios where data features and labels are distributed among participants. Despite the robust privacy-preserving design of VFGNN, we find that it still faces the risk of backdoor attacks, even when labels are inaccessible. This paper proposes BVG, a novel backdoor attack method that leverages multi-hop triggers and backdoor retention, requiring only four target-class nodes to execute effective attacks. Experimental results demonstrate that BVG achieves nearly 100% attack success rates across three commonly used datasets and three GNN models, with minimal impact on main-task accuracy. We also evaluate various defense methods and find that BVG maintains high attack effectiveness even under existing defenses. These findings highlight the need for advanced defense mechanisms to counter sophisticated backdoor attacks in practical VFGNN applications.

Cite

Text

Amado et al. "A Survey on Model-Free Goal Recognition." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/877

Markdown

[Amado et al. "A Survey on Model-Free Goal Recognition." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/amado2024ijcai-survey/) doi:10.24963/ijcai.2024/877

BibTeX

@inproceedings{amado2024ijcai-survey,
  title     = {{A Survey on Model-Free Goal Recognition}},
  author    = {Amado, Leonardo and Shainkopf, Sveta Paster and Pereira, Ramon Fraga and Mirsky, Reuth and Meneguzzi, Felipe},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {7923--7931},
  doi       = {10.24963/ijcai.2024/877},
  url       = {https://mlanthology.org/ijcai/2024/amado2024ijcai-survey/}
}