Node-Level Differentially Private Graph Neural Networks

Abstract

Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data, computing node-level representations by aggregating information from each node's neighborhood. However, this aggregation increases the risk of revealing sensitive information, as a single node can participate in the inference for multiple nodes. As a result, standard privacy-preserving machine learning techniques, such as differentially private stochastic gradient descent (DP-SGD) - which are designed for situations where each data point participates in the inference for one point only - either do not apply, or lead to inaccurate solutions. In this work, we formally define the problem of learning GNN parameters with node-level privacy, and provide an algorithmic solution with a strong differential privacy guarantee. We employ a careful sensitivity analysis and provide a non-trivial extension of the privacy-amplification-by-sampling technique. An empirical evaluation on standard benchmark datasets and architectures demonstrates that our method is indeed able to learn accurate privacy-preserving GNNs, while still outperforming standard non-private methods that completely ignore graph information.
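For context, the standard DP-SGD step the abstract contrasts against can be sketched as follows. This is a generic illustration of per-example gradient clipping plus Gaussian noise (the function name and toy gradients are invented here); it is not the paper's node-level GNN extension, whose sensitivity analysis must additionally account for one node influencing many per-node losses.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step (generic sketch, not the paper's method):
    clip each per-example gradient to L2 norm `clip_norm`, sum, add Gaussian
    noise scaled to the clipping bound, and average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is tied to the per-example sensitivity
    # (clip_norm), which is what clipping guarantees.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Toy usage: three per-example gradients of a 2-parameter model.
rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2]), np.array([6.0, 8.0])]
noisy_avg_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

The key assumption this sketch encodes, and which graph data breaks, is that removing one example changes the clipped sum by at most `clip_norm`; in a GNN, one node's features enter the aggregated representations of all its neighbors, so its removal can perturb many per-example gradients at once.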

Cite

Text

Daigavane et al. "Node-Level Differentially Private Graph Neural Networks." ICLR 2022 Workshops: PAIR2Struct, 2022.

Markdown

[Daigavane et al. "Node-Level Differentially Private Graph Neural Networks." ICLR 2022 Workshops: PAIR2Struct, 2022.](https://mlanthology.org/iclrw/2022/daigavane2022iclrw-nodelevel/)

BibTeX

@inproceedings{daigavane2022iclrw-nodelevel,
  title     = {{Node-Level Differentially Private Graph Neural Networks}},
  author    = {Daigavane, Ameya and Madan, Gagan and Sinha, Aditya and Thakurta, Abhradeep Guha and Aggarwal, Gaurav and Jain, Prateek},
  booktitle = {ICLR 2022 Workshops: PAIR2Struct},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/daigavane2022iclrw-nodelevel/}
}