Assessing and Enforcing Fairness in the AI Lifecycle
Abstract
A significant challenge in detecting and mitigating bias is fostering a mindset among AI developers to address unfairness. The current literature on fairness is broad, and the learning curve to discern where existing metrics and techniques for bias detection or mitigation apply is steep. This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation according to the stages of the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.
Cite
Text
Calegari et al. "Assessing and Enforcing Fairness in the AI Lifecycle." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/735

Markdown
[Calegari et al. "Assessing and Enforcing Fairness in the AI Lifecycle." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/calegari2023ijcai-assessing/) doi:10.24963/IJCAI.2023/735

BibTeX
@inproceedings{calegari2023ijcai-assessing,
title = {{Assessing and Enforcing Fairness in the AI Lifecycle}},
author = {Calegari, Roberta and Castañé, Gabriel G. and Milano, Michela and O'Sullivan, Barry},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {6554--6562},
doi = {10.24963/IJCAI.2023/735},
url = {https://mlanthology.org/ijcai/2023/calegari2023ijcai-assessing/}
}