Private Graphon Estimation for Sparse Graphs

Abstract

We design algorithms for fitting a high-dimensional statistical model to a large, sparse network without revealing sensitive information of individual members. Given a sparse input graph $G$, our algorithms output a node-differentially private nonparametric block model approximation. By node-differentially private, we mean that our output hides the insertion or removal of a vertex and all its adjacent edges. If $G$ is an instance of the network obtained from a generative nonparametric model defined in terms of a graphon $W$, our model guarantees consistency: as the number of vertices tends to infinity, the output of our algorithm converges to $W$ in an appropriate version of the $L_2$ norm. In particular, this means we can estimate the sizes of all multi-way cuts in $G$. Our results hold as long as $W$ is bounded, the average degree of $G$ grows at least like the log of the number of vertices, and the number of blocks goes to infinity at an appropriate rate. We give explicit error bounds in terms of the parameters of the model; in several settings, our bounds improve on or match known nonprivate results.
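For readers who want to experiment with the generative model the abstract refers to, below is a minimal Python sketch of sampling a sparse W-random graph: latent positions are drawn uniformly from [0, 1], and each edge {i, j} appears independently with probability min(1, ρ·W(x_i, x_j)). The function name, the two-block example graphon, and the choice of ρ are ours for illustration only; this is the standard W-random graph model, not the paper's private estimation algorithm.

```python
import numpy as np

def sample_w_random_graph(n, w, rho, rng=None):
    """Sample an n-vertex W-random graph: latent positions x_i ~ Uniform[0, 1],
    edge {i, j} present independently with prob. min(1, rho * w(x_i, x_j))."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(size=n)                      # latent vertex positions
    i, j = np.triu_indices(n, k=1)               # all unordered vertex pairs
    probs = np.minimum(1.0, rho * w(x[i], x[j]))
    adj = np.zeros((n, n), dtype=np.int8)
    adj[i, j] = rng.uniform(size=probs.size) < probs
    return adj + adj.T                           # symmetrize

# Illustrative two-block graphon; rho is scaled so the expected average
# degree grows like log n, the sparsity regime assumed in the abstract.
n = 2000
w = lambda u, v: np.where((u < 0.5) == (v < 0.5), 0.8, 0.2)
rho = np.log(n) / (0.5 * n)   # E[W] = 0.5 for this w
A = sample_w_random_graph(n, w, rho)
print("average degree:", A.sum() / n)
```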

Cite

Text

Christian Borgs, Jennifer Chayes, and Adam Smith. "Private Graphon Estimation for Sparse Graphs." Neural Information Processing Systems, 2015.

Markdown

[Christian Borgs, Jennifer Chayes, and Adam Smith. "Private Graphon Estimation for Sparse Graphs." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/borgs2015neurips-private/)

BibTeX

@inproceedings{borgs2015neurips-private,
  title     = {{Private Graphon Estimation for Sparse Graphs}},
  author    = {Borgs, Christian and Chayes, Jennifer and Smith, Adam},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {1369--1377},
  url       = {https://mlanthology.org/neurips/2015/borgs2015neurips-private/}
}