SPEAR: Exact Gradient Inversion of Batches in Federated Learning
Abstract
Federated learning is a framework for collaborative machine learning in which clients share only gradient updates, not their private data, with a server. However, it was recently shown that gradient inversion attacks can reconstruct this data from the shared gradients. In the important honest-but-curious setting, existing attacks enable exact reconstruction only for a batch size of $b=1$, with larger batches permitting only approximate reconstruction. In this work, we propose SPEAR, *the first algorithm reconstructing whole batches with $b>1$ exactly*. SPEAR combines insights into the explicit low-rank structure of gradients with a sampling-based algorithm. Crucially, we leverage ReLU-induced gradient sparsity to precisely filter out large numbers of incorrect samples, making a final reconstruction step tractable. We provide an efficient GPU implementation for fully connected networks and show that it recovers high-dimensional ImageNet inputs in batches of up to $b \lesssim 25$ exactly while scaling to large networks. Finally, we show theoretically that much larger batches can be reconstructed with high probability given exponential time.
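The low-rank structure the abstract refers to follows directly from backpropagation: for a fully connected layer, the weight gradient is a sum of $b$ rank-one outer products, $\nabla_W \mathcal{L} = \sum_{i=1}^{b} \delta_i x_i^\top$, so its rank is at most the batch size. The following minimal sketch (not the authors' implementation; batch size and layer dimensions are hypothetical) checks this property and shows where the ReLU-induced sparsity comes from:

```python
# Minimal sketch of the gradient structure SPEAR exploits (assumed setup,
# not the paper's code): for a fully connected layer, the weight gradient
# is a sum of b rank-1 outer products, so rank(dL/dW) <= b.
import torch

torch.manual_seed(0)
b, d_in, d_out = 8, 64, 32           # hypothetical batch size and layer dims

x = torch.randn(b, d_in)             # private inputs an attacker wants back
W = torch.randn(d_out, d_in, requires_grad=True)

z = x @ W.T                          # pre-activations, shape (b, d_out)
a = torch.relu(z)                    # ReLU zeroes entries -> sparse backward signal
loss = a.sum()
loss.backward()

# dL/dW = sum_i delta_i x_i^T, so its rank is at most b (here b < d_in, d_out).
print(torch.linalg.matrix_rank(W.grad))  # prints 8, i.e. the batch size
```

Because the shared gradient generically has rank exactly $b$, the individual summands (and hence the inputs) are recoverable in principle whenever $b$ is smaller than the layer width, which is the regime the paper's sampling-based algorithm targets.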
Cite
Text
Dimitrov et al. "SPEAR: Exact Gradient Inversion of Batches in Federated Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-3390

Markdown
[Dimitrov et al. "SPEAR: Exact Gradient Inversion of Batches in Federated Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/dimitrov2024neurips-spear/) doi:10.52202/079017-3390

BibTeX
@inproceedings{dimitrov2024neurips-spear,
title = {{SPEAR: Exact Gradient Inversion of Batches in Federated Learning}},
author = {Dimitrov, Dimitar I. and Baader, Maximilian and Müller, Mark Niklas and Vechev, Martin},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-3390},
url = {https://mlanthology.org/neurips/2024/dimitrov2024neurips-spear/}
}