SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles

Abstract

A critical concern in data-driven processes is building models whose outcomes do not discriminate against protected groups. In learning tasks, knowledge of the group attributes is essential to ensure non-discrimination, yet in practice these attributes may be unavailable due to legal and ethical requirements. To address this challenge, this paper studies a model that protects the privacy of individuals' sensitive information while still allowing the learning of non-discriminatory predictors. A key feature of the proposed model is that it enables the use of off-the-shelf, non-private fair models to create a privacy-preserving and fair model. The paper analyzes the relations among accuracy, privacy, and fairness, and assesses the benefits of the proposed models on several prediction tasks. Notably, the proposed framework enables scalable and accurate training of private and fair models for very large neural networks.
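As the title indicates, SF-PATE builds on the PATE (Private Aggregation of Teacher Ensembles) framework, in which an ensemble of teachers trained on disjoint data partitions labels public queries through a noisy vote. The sketch below illustrates only that generic noisy-argmax aggregation step; it is a minimal assumption-laden illustration, not the paper's full method, and omits SF-PATE's fairness-aware components and scalability mechanisms.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, noise_scale, rng=None):
    """PATE-style noisy argmax over teacher votes (illustrative sketch).

    teacher_votes: per-teacher predicted labels for a single query.
    noise_scale:   Laplace scale; larger values give stronger privacy
                   but a noisier aggregated label.
    """
    rng = np.random.default_rng(rng)
    # Tally how many teachers voted for each class.
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Perturb the tallies with Laplace noise before taking the argmax,
    # so no single teacher (i.e., data partition) can sway the output.
    counts += rng.laplace(loc=0.0, scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Example: 9 of 10 hypothetical teachers vote for class 1.
votes = np.array([1] * 9 + [0])
label = noisy_aggregate(votes, num_classes=2, noise_scale=0.5, rng=0)
```

A student model is then trained on such noisily labeled public data, so only the aggregation mechanism, not the private training data, influences the released model.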

Cite

Text

Tran et al. "SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/56

Markdown

[Tran et al. "SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/tran2023ijcai-sf/) doi:10.24963/IJCAI.2023/56

BibTeX

@inproceedings{tran2023ijcai-sf,
  title     = {{SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles}},
  author    = {Tran, Cuong and Zhu, Keyu and Fioretto, Ferdinando and Van Hentenryck, Pascal},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {501--509},
  doi       = {10.24963/IJCAI.2023/56},
  url       = {https://mlanthology.org/ijcai/2023/tran2023ijcai-sf/}
}