Why Does Private Fine-Tuning Resist Differential Privacy Noise? A Representation Learning Perspective

Abstract

In this paper, we investigate the impact of differential privacy (DP) on the fine-tuning of publicly pre-trained models, focusing on Vision Transformers (ViTs). We introduce an approach for analyzing the DP fine-tuning process by leveraging a representation learning law to measure the separability of features across intermediate layers of the model. Through a series of experiments with ViTs pre-trained on ImageNet and fine-tuned on a subset of CIFAR-10, we explore the effects of DP noise on the learned representations. Our results show that, without proper hyperparameter tuning, DP noise can significantly degrade feature quality, particularly in high-privacy regimes. However, when hyperparameters are optimized, the impact of DP noise on the learned representations is limited, leading to high model accuracy even in high-privacy settings. These findings provide insight into how pre-training on public datasets can help mitigate the privacy-utility trade-off in private deep learning applications.
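
As a rough illustration of the analysis described in the abstract, the sketch below computes one common layer-wise separability measure, the within-/between-class scatter ratio Tr(S_W S_B^+) from the data-separation literature, on the features of a single intermediate layer. The function name, tensor shapes, and the choice of this particular metric are assumptions made for illustration; they are not the authors' exact implementation.

# Hypothetical sketch: class separability of one layer's features.
# Lower values indicate more linearly separable classes.
import torch

def separation_fuzziness(features: torch.Tensor, labels: torch.Tensor) -> float:
    # features: (N, d) activations from one intermediate layer; labels: (N,) class ids.
    n, d = features.shape
    global_mean = features.mean(dim=0)
    s_w = features.new_zeros(d, d)   # within-class scatter
    s_b = features.new_zeros(d, d)   # between-class scatter
    for c in labels.unique():
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        centered = fc - mu_c
        s_w += centered.T @ centered / n
        diff = (mu_c - global_mean).unsqueeze(1)
        s_b += (fc.shape[0] / n) * (diff @ diff.T)
    # Pseudo-inverse handles the rank-deficient between-class scatter.
    return torch.trace(s_w @ torch.linalg.pinv(s_b)).item()

Evaluated per transformer block for models fine-tuned at different privacy budgets, a quantity of this kind would trace how DP noise reshapes the depth-wise separation curve that the paper studies.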

Cite

Text

Zhao et al. "Why Does Private Fine-Tuning Resist Differential Privacy Noise? A Representation Learning Perspective." ICLR 2025 Workshops: Data_Problems, 2025.

Markdown

[Zhao et al. "Why Does Private Fine-Tuning Resist Differential Privacy Noise? A Representation Learning Perspective." ICLR 2025 Workshops: Data_Problems, 2025.](https://mlanthology.org/iclrw/2025/zhao2025iclrw-private/)

BibTeX

@inproceedings{zhao2025iclrw-private,
  title     = {{Why Does Private Fine-Tuning Resist Differential Privacy Noise? A Representation Learning Perspective}},
  author    = {Zhao, Yue and Xia, Yutong and Wang, Chendi},
  booktitle = {ICLR 2025 Workshops: Data_Problems},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/zhao2025iclrw-private/}
}