Differential Privacy Has Bounded Impact on Fairness in Classification

Abstract

We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use this Lipschitz property to prove a non-asymptotic bound showing that, as the number of samples increases, the fairness level of private models gets closer to that of their non-private counterparts. This bound also highlights the importance of the confidence margin of a model for the disparate impact of differential privacy.
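To give the flavor of the result, here is a hedged sketch of what such a pointwise Lipschitz property looks like; the symbols F, \theta, \theta_{\mathrm{priv}} and L(\theta) below are illustrative notation, not necessarily the paper's exact constants or norms:

\[
\bigl| F(\theta) - F(\theta_{\mathrm{priv}}) \bigr| \;\le\; L(\theta)\, \lVert \theta - \theta_{\mathrm{priv}} \rVert,
\]

where F maps model parameters to a group fairness measure, \theta denotes the parameters of the non-private model, \theta_{\mathrm{priv}} those of its privately trained counterpart, and L(\theta) is a pointwise constant that depends on the model at \theta (in particular on its confidence margins). Combined with a high-probability bound on \lVert \theta - \theta_{\mathrm{priv}} \rVert that shrinks as the number of samples grows, such a property yields a non-asymptotic bound on the fairness gap of the kind described in the abstract.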

Cite

Text

Mangold et al. "Differential Privacy Has Bounded Impact on Fairness in Classification." International Conference on Machine Learning, 2023.

Markdown

[Mangold et al. "Differential Privacy Has Bounded Impact on Fairness in Classification." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/mangold2023icml-differential/)

BibTeX

@inproceedings{mangold2023icml-differential,
  title     = {{Differential Privacy Has Bounded Impact on Fairness in Classification}},
  author    = {Mangold, Paul and Perrot, Michaël and Bellet, Aurélien and Tommasi, Marc},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {23681--23705},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/mangold2023icml-differential/}
}