Privacy Amplification Through Synthetic Data: Insights from Linear Regression

Abstract

Synthetic data inherits the differential privacy guarantees of the model used to generate it. Additionally, synthetic data may benefit from privacy amplification when the generative model is kept hidden. While empirical studies suggest that this phenomenon occurs, a rigorous theoretical understanding is still lacking. In this paper, we investigate this question through the well-understood framework of linear regression. First, we establish negative results showing that if an adversary controls the seed of the generative model, a single synthetic data point can leak as much information as releasing the model itself. Conversely, we show that when synthetic data is generated from random inputs, releasing a limited number of synthetic data points amplifies privacy beyond the model's inherent guarantees. We believe our findings in linear regression can serve as a foundation for deriving more general bounds in the future.
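The setting the abstract describes can be sketched in a few lines of NumPy: fit a linear regression, privatize it (here via simple Gaussian output perturbation, with the noise scale chosen arbitrarily rather than calibrated to a specific (epsilon, delta)), then release a small number of synthetic points generated from random inputs while the model itself stays hidden. All variable names and values below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy private dataset (purely illustrative)
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Fit linear regression, then privatize the weights with Gaussian noise
# (output perturbation; sigma is an arbitrary illustrative choice).
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
sigma = 0.5
w_priv = w_hat + sigma * rng.normal(size=d)

# Release m synthetic points generated from *random* inputs; the
# privatized model w_priv itself is never released.
m = 5
X_syn = rng.normal(size=(m, d))
y_syn = X_syn @ w_priv
```

The contrast studied in the paper is between this random-input release (where amplification can occur) and an adversarially chosen seed `X_syn`, under which even one synthetic point can reveal as much as the model.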

Cite

Text

Pierquin et al. "Privacy Amplification Through Synthetic Data: Insights from Linear Regression." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Pierquin et al. "Privacy Amplification Through Synthetic Data: Insights from Linear Regression." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/pierquin2025icml-privacy/)

BibTeX

@inproceedings{pierquin2025icml-privacy,
  title     = {{Privacy Amplification Through Synthetic Data: Insights from Linear Regression}},
  author    = {Pierquin, Clément and Bellet, Aurélien and Tommasi, Marc and Boussard, Matthieu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {49329--49354},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/pierquin2025icml-privacy/}
}