Confidential-PROFITT: Confidential PROof of FaIr Training of Trees

Abstract

Post hoc auditing of model fairness suffers from two potential drawbacks: (1) auditing may be highly sensitive to the test samples chosen; (2) the model and/or its training data may need to be shared with an auditor, thereby breaking confidentiality. We address these issues by instead providing a certificate that demonstrates that the learning algorithm itself is fair, and hence so too is the trained model. We introduce a method to provide a confidential proof of fair training, in the context of widely used decision trees, which we term Confidential-PROFITT. We propose novel fair decision tree learning algorithms along with customized zero-knowledge proof protocols to obtain a proof of fairness that can be audited by a third party. Using zero-knowledge proofs enables us to guarantee confidentiality of both the model and its training data. We show empirically that bounding the information gain of each node with respect to the sensitive attributes reduces the unfairness of the final tree. In extensive experiments on the COMPAS, Communities and Crime, Default Credit, and Adult datasets, we demonstrate that a company can use Confidential-PROFITT to certify the fairness of its decision tree to an auditor in less than two minutes, indicating the practicality of our approach. This holds for both the demographic parity and equalized odds definitions of fairness. Finally, we extend Confidential-PROFITT to apply to ensembles of trees.
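To make the core idea concrete, the following is a minimal sketch (not the authors' released code) of fairness-constrained split selection as described in the abstract: while growing the tree, a candidate split is rejected whenever its information gain with respect to the sensitive attribute exceeds a bound. The helper names and the threshold gamma are illustrative assumptions; Confidential-PROFITT additionally proves in zero knowledge that this constraint was respected during training, which is not shown here.

import numpy as np

def entropy(y):
    """Shannon entropy of a discrete label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(values, mask):
    """Entropy reduction of `values` when partitioned by boolean `mask`."""
    n = len(values)
    left, right = values[mask], values[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return entropy(values) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

def best_fair_split(X, y, s, gamma=0.05):
    """Pick the split maximizing gain on labels `y`, subject to the gain
    on the sensitive attribute `s` staying at or below `gamma`
    (the fairness bound; its value here is an illustrative assumption)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            # Reject splits that reveal too much about the sensitive attribute.
            if information_gain(s, mask) <= gamma:
                gain = information_gain(y, mask)
                if best is None or gain > best[0]:
                    best = (gain, j, t)
    return best  # (gain, feature index, threshold), or None if no split qualifies

For reference, demographic parity requires equal positive-prediction rates across sensitive groups, while equalized odds requires equal true- and false-positive rates; the sketch above is agnostic to which definition the bound gamma is derived from.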

Cite

Text

Shamsabadi et al. "Confidential-PROFITT: Confidential PROof of FaIr Training of Trees." International Conference on Learning Representations, 2023.

Markdown

[Shamsabadi et al. "Confidential-PROFITT: Confidential PROof of FaIr Training of Trees." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/shamsabadi2023iclr-confidentialprofitt/)

BibTeX

@inproceedings{shamsabadi2023iclr-confidentialprofitt,
  title     = {{Confidential-PROFITT: Confidential PROof of FaIr Training of Trees}},
  author    = {Shamsabadi, Ali Shahin and Wyllie, Sierra Calanda and Franzese, Nicholas and Dullerud, Natalie and Gambs, Sébastien and Papernot, Nicolas and Wang, Xiao and Weller, Adrian},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/shamsabadi2023iclr-confidentialprofitt/}
}