Generalized Disparate Impact for Configurable Fairness Solutions in ML
Abstract
We make two contributions in the field of AI fairness over continuous protected attributes. First, we show that the Hirschfeld-Gebelein-Rényi (HGR) indicator (the only one currently available for such a case) is valuable but subject to a few crucial limitations regarding semantics, interpretability, and robustness. Second, we introduce a family of indicators that are: 1) complementary to HGR in terms of semantics; 2) fully interpretable and transparent; 3) robust over finite samples; 4) configurable to suit specific applications. Our approach also allows us to define fine-grained constraints to permit certain types of dependence and forbid others selectively. By expanding the available options for continuous protected attributes, our approach represents a significant contribution to the area of fair artificial intelligence.
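To give a concrete flavour of a disparate-impact-style indicator over a continuous protected attribute, the sketch below fits a least-squares polynomial of the model output on the protected attribute and aggregates the magnitude of the non-constant coefficients; the polynomial degree plays the role of a configuration knob, and bounding individual coefficients would correspond to permitting some forms of dependence while forbidding others. This is only an illustration of the general idea under our own assumptions, not the paper's exact indicator; the function name `gedi_indicator` and the coefficient aggregation are hypothetical choices.

```python
import numpy as np

def gedi_indicator(z, y, degree=1):
    """Illustrative generalized-disparate-impact-style score (assumption,
    not the paper's exact definition).

    Fits a least-squares polynomial of the model output `y` on the
    continuous protected attribute `z`, then sums the magnitudes of the
    non-constant coefficients: 0 means the fitted polynomial is flat,
    i.e. no dependence of this functional form is detected.
    """
    z, y = np.asarray(z, dtype=float), np.asarray(y, dtype=float)
    # Design matrix with columns [1, z, z**2, ..., z**degree].
    phi = np.vander(z, N=degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
    # Ignore the intercept; a fairness constraint could bound this value,
    # or each coefficient separately for finer-grained control.
    return np.abs(coef[1:]).sum()

# Toy usage: predictions that depend quadratically on the attribute.
rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, size=1000)
y = 0.5 * z ** 2 + rng.normal(scale=0.05, size=z.size)
print(gedi_indicator(z, y, degree=1))  # near zero: no linear dependence
print(gedi_indicator(z, y, degree=2))  # larger: quadratic dependence detected
```

With a degree-1 fit the score reduces to a covariance-based (linear) notion of disparate impact, while higher degrees capture higher-order dependence, which is the sense in which the indicator is configurable.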
Cite
Text
Giuliani et al. "Generalized Disparate Impact for Configurable Fairness Solutions in ML." International Conference on Machine Learning, 2023.
Markdown
[Giuliani et al. "Generalized Disparate Impact for Configurable Fairness Solutions in ML." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/giuliani2023icml-generalized/)
BibTeX
@inproceedings{giuliani2023icml-generalized,
title = {{Generalized Disparate Impact for Configurable Fairness Solutions in ML}},
author = {Giuliani, Luca and Misino, Eleonora and Lombardi, Michele},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {11443--11458},
volume = {202},
url = {https://mlanthology.org/icml/2023/giuliani2023icml-generalized/}
}