On the Within-Group Fairness of Screening Classifiers

Abstract

Screening classifiers are increasingly used to identify qualified candidates in a variety of selection processes. In this context, it has been recently shown that if a classifier is calibrated, one can identify the smallest set of candidates which contains, in expectation, a desired number of qualified candidates using a threshold decision rule. This lends support to focusing on calibration as the only requirement for screening classifiers. In this paper, we argue that screening policies that use calibrated classifiers may suffer from an understudied type of within-group unfairness—they may unfairly treat qualified members within demographic groups of interest. Further, we argue that this type of unfairness can be avoided if classifiers satisfy within-group monotonicity, a natural monotonicity property within each group. Then, we introduce an efficient post-processing algorithm based on dynamic programming to minimally modify a given calibrated classifier so that its probability estimates satisfy within-group monotonicity. We validate our algorithm using US Census survey data and show that within-group monotonicity can often be achieved at a small cost in terms of prediction granularity and shortlist size.
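The shortlist construction the abstract refers to can be sketched briefly: with a calibrated classifier, each predicted probability is an unbiased estimate of a candidate's qualification, so the smallest set containing, in expectation, a desired number of qualified candidates is obtained by a threshold rule on the scores. The following is an illustrative sketch under that assumption, not the paper's own code; the function name `shortlist` is hypothetical.

```python
def shortlist(scores, k):
    """Return the smallest set of candidate indices whose calibrated
    probability estimates sum to at least k, i.e., a shortlist that
    contains, in expectation, at least k qualified candidates."""
    # Greedily take candidates in decreasing score order: this is
    # equivalent to thresholding the calibrated scores.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    picked, expected = [], 0.0
    for i in order:
        if expected >= k:
            break
        picked.append(i)
        expected += scores[i]
    return picked

# Toy example with calibrated scores for six candidates.
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
print(shortlist(scores, 2))  # -> [0, 1, 2]; expected qualified = 2.3 >= 2
```

The within-group unfairness the paper studies arises because such a threshold rule, applied to a miscalibrated ordering within a group, can skip a group member who is more likely qualified than one it includes; within-group monotonicity rules this out.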

Cite

Text

Okati et al. "On the Within-Group Fairness of Screening Classifiers." International Conference on Machine Learning, 2023.

Markdown

[Okati et al. "On the Within-Group Fairness of Screening Classifiers." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/okati2023icml-withingroup/)

BibTeX

@inproceedings{okati2023icml-withingroup,
  title     = {{On the Within-Group Fairness of Screening Classifiers}},
  author    = {Okati, Nastaran and Tsirtsis, Stratis and Gomez Rodriguez, Manuel},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {26495--26516},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/okati2023icml-withingroup/}
}