A Brief Tutorial on Sample Size Calculations for Fairness Audits
Abstract
In fairness audits, a standard objective is to detect whether a given algorithm performs substantially differently between subgroups. Properly powering the statistical analysis of such audits is crucial for obtaining informative fairness assessments, as it ensures a high probability of detecting unfairness when it exists. However, limited guidance is available on how much data a fairness audit requires: directly applicable results for commonly used fairness metrics are lacking, and the case of unequal subgroup sample sizes has not been considered. In this tutorial, we address these issues by providing guidance on how to determine the subgroup sample sizes required to maximize the statistical power of hypothesis tests for detecting unfairness. Our findings apply to audits of binary classification models and to multiple fairness metrics derived as summaries of the confusion matrix. Furthermore, we discuss other aspects of audit study design that can increase the reliability of audit results.
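As a concrete illustration (not the procedure from the paper itself), a standard two-sample test of proportions can be powered to detect a gap in a confusion-matrix metric such as the true positive rate between two subgroups. The sketch below uses statsmodels; the TPR values, the 2:1 allocation ratio, and the 80% power target are hypothetical assumptions chosen for the example.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical subgroup true positive rates; a TPR gap of 0.10
# is the level of unfairness the audit should be able to detect.
tpr_group_a = 0.80
tpr_group_b = 0.70

# Cohen's h effect size for a difference in proportions.
effect_size = proportion_effectsize(tpr_group_a, tpr_group_b)

# Solve for the group-A sample size giving 80% power at alpha = 0.05,
# with group B sampled twice as heavily (ratio = nobs2 / nobs1 = 2).
n_group_a = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=2.0,
    alternative="two-sided",
)

# For a TPR comparison, these counts refer to positive-labeled
# examples within each subgroup, not total audit samples.
print(f"Group A: {n_group_a:.0f}, Group B: {2 * n_group_a:.0f}")

Because the allocation ratio enters the power formula, the same calculation can be repeated over different ratios to see how unequal subgroup sizes affect what gap is detectable, which is the kind of trade-off the tutorial examines.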
Cite
Text
Singh et al. "A Brief Tutorial on Sample Size Calculations for Fairness Audits." NeurIPS 2023 Workshops: RegML, 2023.
Markdown
[Singh et al. "A Brief Tutorial on Sample Size Calculations for Fairness Audits." NeurIPS 2023 Workshops: RegML, 2023.](https://mlanthology.org/neuripsw/2023/singh2023neuripsw-brief/)
BibTeX
@inproceedings{singh2023neuripsw-brief,
  title = {{A Brief Tutorial on Sample Size Calculations for Fairness Audits}},
  author = {Singh, Harvineet and Xia, Fan and Kim, Mi-Ok and Pirracchio, Romain and Chunara, Rumi and Feng, Jean},
  booktitle = {NeurIPS 2023 Workshops: RegML},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/singh2023neuripsw-brief/}
}