Can We Trust Fair-AI?

Abstract

There is a fast-growing literature addressing the fairness of AI models (fair-AI), with a continuous stream of new conceptual frameworks, methods, and tools. How much can we trust them? How much do they actually impact society? We take a critical view of fair-AI and survey issues, simplifications, and mistakes that researchers and practitioners often underestimate, which in turn can undermine trust in fair-AI and limit its contribution to society. In particular, we discuss the hyper-focus on fairness metrics and on optimizing their average performance. We instantiate this observation by discussing the Yule's effect of fair-AI tools: being fair on average does not imply being fair in contexts that matter. We conclude that the use of fair-AI methods should be complemented with the design, development, and verification practices that are commonly summarized under the umbrella of trustworthy AI.

Cite

Text

Ruggieri et al. "Can We Trust Fair-AI?" AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26798

Markdown

[Ruggieri et al. "Can We Trust Fair-AI?" AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/ruggieri2023aaai-we/) doi:10.1609/AAAI.V37I13.26798

BibTeX

@inproceedings{ruggieri2023aaai-we,
  title     = {{Can We Trust Fair-AI?}},
  author    = {Ruggieri, Salvatore and Álvarez, José M. and Pugnana, Andrea and State, Laura and Turini, Franco},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {15421--15430},
  doi       = {10.1609/AAAI.V37I13.26798},
  url       = {https://mlanthology.org/aaai/2023/ruggieri2023aaai-we/}
}