Reconstructing Training Data from Model Gradient, Provably

Abstract

Understanding when and how much a model gradient leaks information about the training samples is an important question in privacy. In this paper, we present a surprising result: even without training on or memorizing the data, we can fully reconstruct the training samples from a single gradient query at a randomly chosen parameter value. We prove the identifiability of the training data under mild assumptions, for both shallow and deep neural networks and a wide range of activation functions. We also present a statistically and computationally efficient algorithm based on tensor decomposition to reconstruct the training data. As a provable attack that reveals sensitive training data, our findings suggest potentially severe threats to privacy, especially in federated learning.
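To see why a single gradient query can leak a training sample, consider the simplest case the paper builds on: a two-layer network f(x) = aᵀσ(Wx) trained on one point. For any pointwise activation, every row of the gradient ∂L/∂W is a scalar multiple of x, so x can be read off (up to sign and scale) from one gradient at random weights. The sketch below illustrates this single-sample case; it is not the paper's tensor-decomposition algorithm (which handles batches, where each gradient row becomes a mixture Σᵢ cᵢⱼ xᵢ that the tensor step disentangles), and all names and dimensions are illustrative.

```python
# Minimal sketch (assumption-laden, not the authors' algorithm):
# for f(x) = a^T sigma(W x) and a single training point, each row of
# dL/dW equals (dL/df) * a_j * sigma'(w_j . x) * x, i.e. a scalar
# multiple of x. One gradient query at random weights reveals x.
import torch

torch.manual_seed(0)
d, m = 8, 16                                # input dim, hidden width
x = torch.randn(d)                          # the "private" training sample
y = torch.tensor(1.0)                       # its label

W = torch.randn(m, d, requires_grad=True)   # random first-layer weights
a = torch.randn(m)                          # random second-layer weights

pred = a @ torch.relu(W @ x)                # two-layer forward pass
loss = 0.5 * (pred - y) ** 2
loss.backward()                             # the single gradient query

# Any nonzero row of dL/dW is parallel to x; take the largest one.
g = W.grad
row = g[g.norm(dim=1).argmax()]
x_hat = row / row.norm()

cos = torch.dot(x_hat, x / x.norm()).abs()
print(f"|cosine(x_hat, x)| = {cos:.6f}")    # ~1.0: the sample leaks
```

With a batch of samples the rows of the gradient mix all the xᵢ, which is exactly where the paper's tensor-decomposition machinery and identifiability analysis come in.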

Cite

Text

Wang et al. "Reconstructing Training Data from Model Gradient, Provably." Artificial Intelligence and Statistics, 2023.

Markdown

[Wang et al. "Reconstructing Training Data from Model Gradient, Provably." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/wang2023aistats-reconstructing/)

BibTeX

@inproceedings{wang2023aistats-reconstructing,
  title     = {{Reconstructing Training Data from Model Gradient, Provably}},
  author    = {Wang, Zihan and Lee, Jason and Lei, Qi},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {6595--6612},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/wang2023aistats-reconstructing/}
}