ABCFair: An Adaptable Benchmark Approach for Comparing Fairness Methods

Abstract

Numerous methods have been developed to pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, the greatest common denominator of problem settings is small, significantly complicating benchmarking. Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. We apply this benchmark to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual-label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.

Cite

Text

Defrance et al. "ABCFair: An Adaptable Benchmark Approach for Comparing Fairness Methods." Neural Information Processing Systems, 2024. doi:10.52202/079017-1268

Markdown

[Defrance et al. "ABCFair: An Adaptable Benchmark Approach for Comparing Fairness Methods." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/defrance2024neurips-abcfair/) doi:10.52202/079017-1268

BibTeX

@inproceedings{defrance2024neurips-abcfair,
  title     = {{ABCFair: An Adaptable Benchmark Approach for Comparing Fairness Methods}},
  author    = {Defrance, MaryBeth and Buyl, Maarten and De Bie, Tijl},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1268},
  url       = {https://mlanthology.org/neurips/2024/defrance2024neurips-abcfair/}
}