Fairness of AI Systems in the Legal Context
Abstract
The digital age has profoundly reshaped societal interactions, which are now heavily influenced by algorithms and Artificial Intelligence (AI). This evolution introduces new challenges in understanding and addressing discrimination, which now arises from both human and algorithmic biases; the latter can produce discriminatory decisions and lead to a form of algorithmic technocracy. An AI system is fair when it operates without discrimination. Legal frameworks must adapt to these changes, integrating traditional principles with contemporary technological realities. This paper explores the concept of fairness in AI systems, highlighting the need for both regulatory and technical measures to ensure non-discriminatory practices and to assess accountability for discriminatory behavior. We present and discuss the most commonly used mathematical measures of fairness, and emphasize the requirements a measure must meet to comply with regulatory notions of fairness. Our investigation highlights the importance of aligning legal and mathematical approaches to achieve fairness and accountability in AI, and we advocate for ongoing assessment and adjustment to maintain ethical standards.
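As a concrete illustration (not taken from the paper itself), one of the standard mathematical fairness measures the abstract refers to is demographic parity: a classifier satisfies it when the positive-prediction rate is the same across groups defined by a protected attribute. A minimal sketch, with hypothetical example data:

```python
# Illustrative sketch of demographic parity, one of the standard
# mathematical fairness measures. A classifier satisfies demographic
# parity when P(Yhat = 1 | A = 0) == P(Yhat = 1 | A = 1), i.e. the rate
# of positive predictions is independent of the protected attribute A.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions within one protected group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    return abs(positive_rate(predictions, groups, 0)
               - positive_rate(predictions, groups, 1))

# Hypothetical data: group 0 receives positive decisions at rate 0.75,
# group 1 at rate 0.25, so the demographic parity difference is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A regulator-facing audit would then require this gap to fall below some tolerance; as the paper argues, which measure and which tolerance are appropriate depends on the legal notion of discrimination at stake.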
Cite
Text
Paternolli et al. "Fairness of AI Systems in the Legal Context." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-92648-8_4
Markdown
[Paternolli et al. "Fairness of AI Systems in the Legal Context." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/paternolli2024eccvw-fairness/) doi:10.1007/978-3-031-92648-8_4
BibTeX
@inproceedings{paternolli2024eccvw-fairness,
  title     = {{Fairness of AI Systems in the Legal Context}},
  author    = {Paternolli, Veronica and Preda, Mila Dalla and Giacobazzi, Roberto},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {53-67},
  doi       = {10.1007/978-3-031-92648-8_4},
  url       = {https://mlanthology.org/eccvw/2024/paternolli2024eccvw-fairness/}
}