The Impact of Features Used by Algorithms on Perceptions of Fairness
Abstract
Ensuring fairness is essential for ethical decision-making in many domains. Informally, a neural network is considered fair if it treats similar individuals similarly on a given task. We introduce FaVeR (Fairness Verification and Repair), a framework for efficiently verifying and repairing pre-trained neural networks with respect to individual fairness properties. FaVeR ensures fairness via an iterative search for high-sensitivity neurons and backward adjustment of their weights, guided by counterexamples generated from fairness verification using satisfiability modulo convex programming. By addressing fairness at the neuron level, FaVeR minimizes the impact of repair on overall model performance. Experimental evaluations on common fairness datasets show that FaVeR achieves a 100% fairness repair rate across all models, with an accuracy reduction of less than 2.27%. Moreover, its significantly lower average runtime than prior verification-and-repair approaches makes it suitable for practical applications.
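The individual-fairness notion in the abstract, treating inputs that differ only in a protected attribute the same way, can be illustrated with a toy counterexample search. This is a minimal sketch, not the paper's method: the model, the function names, and the data here are all hypothetical, and a real verifier would search the input space symbolically (e.g., via satisfiability modulo convex programming) rather than scan a finite sample set.

```python
import numpy as np

def predict(weights, bias, x):
    """Toy linear 'network': thresholded linear score."""
    return int(np.dot(weights, x) + bias > 0)

def fairness_counterexample(weights, bias, samples, protected_idx):
    """Return the first pair (x, x') that differs only in the binary
    protected attribute yet is classified differently, else None."""
    for x in samples:
        x_twin = x.copy()
        x_twin[protected_idx] = 1 - x_twin[protected_idx]  # flip protected bit
        if predict(weights, bias, x) != predict(weights, bias, x_twin):
            return x, x_twin
    return None

# A nonzero weight on the protected attribute (index 0) makes the
# model sensitive to it, so a counterexample is likely to exist.
w = np.array([2.0, 1.0, -1.0])
b = -0.5
data = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
cex = fairness_counterexample(w, b, data, protected_idx=0)
```

In a counterexample-guided repair loop like the one the abstract describes, such a pair would then drive a weight adjustment on the neurons most sensitive to the protected attribute, and verification would be re-run until no counterexample remains.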
Cite
Text

Estornell et al. "The Impact of Features Used by Algorithms on Perceptions of Fairness." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/42

Markdown

[Estornell et al. "The Impact of Features Used by Algorithms on Perceptions of Fairness." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/estornell2024ijcai-impact/) doi:10.24963/ijcai.2024/42

BibTeX
@inproceedings{estornell2024ijcai-impact,
title = {{The Impact of Features Used by Algorithms on Perceptions of Fairness}},
author = {Estornell, Andrew and Zhang, Tina and Das, Sanmay and Ho, Chien-Ju and Juba, Brendan and Vorobeychik, Yevgeniy},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {376-384},
doi = {10.24963/ijcai.2024/42},
url = {https://mlanthology.org/ijcai/2024/estornell2024ijcai-impact/}
}