Achieving Fairness at No Utility Cost via Data Reweighing with Influence
Abstract
With the rapid development of algorithmic governance, fairness has become a compulsory property for machine learning models to suppress unintentional discrimination. In this paper, we focus on the pre-processing aspect of achieving fairness and propose a data reweighing approach that only adjusts the weights of training samples. Unlike most previous reweighing methods, which typically assign a uniform weight to each (sub)group, we granularly model the influence of each training sample on fairness-related quantities and predictive utility, and compute individual weights based on influence under constraints from both fairness and utility. Experimental results reveal that previous methods achieve fairness at a non-negligible cost in utility, whereas our approach can empirically relax this trade-off and obtain cost-free fairness for equal opportunity. We demonstrate cost-free fairness with vanilla classifiers and standard training processes, comparing against baseline methods on multiple real-world tabular datasets. Code available at https://github.com/brandeis-machine-learning/influence-fairness.
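To make the influence-based reweighing idea from the abstract concrete, the sketch below illustrates one possible instantiation under several assumptions: a logistic-regression classifier, influence approximated via the inverse training-loss Hessian, a fairness gap (e.g., an equal-opportunity gap) and utility loss measured on a validation set, and a simple linear program over per-sample weight perturbations. All function names (`influences`, `reweigh`, etc.) and the exact constraint set-up are illustrative assumptions, not the authors' released implementation; see the repository linked above for the official code.

```python
# Hedged sketch of influence-based data reweighing for fairness.
# Assumptions: logistic regression, validation-set gradients for utility and
# for a fairness gap are supplied by the caller; signs depend on how the gap
# is defined. This is not the authors' released code.
import numpy as np
from scipy.optimize import linprog


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def grad_logloss(theta, x, y):
    # Gradient of one sample's binary cross-entropy loss w.r.t. theta.
    return (sigmoid(x @ theta) - y) * x


def hessian_logloss(theta, X, y, l2=1e-3):
    # Hessian of the (L2-regularised) average training loss.
    p = sigmoid(X @ theta)
    w = p * (1 - p)
    return (X * w[:, None]).T @ X / len(X) + l2 * np.eye(X.shape[1])


def influences(theta, X_tr, y_tr, grad_target, l2=1e-3):
    # Influence of upweighting each training sample on a target quantity
    # (validation utility loss or fairness gap):
    #   I_i ≈ -grad_target^T H^{-1} grad_loss_i
    h_inv_g = np.linalg.solve(hessian_logloss(theta, X_tr, y_tr, l2), grad_target)
    return np.array([-grad_logloss(theta, x, y) @ h_inv_g
                     for x, y in zip(X_tr, y_tr)])


def reweigh(theta, X_tr, y_tr, grad_util, grad_fair, fair_gap):
    # Per-sample influences on validation utility and on the fairness gap.
    util_inf = influences(theta, X_tr, y_tr, grad_util)
    fair_inf = influences(theta, X_tr, y_tr, grad_fair)
    n = len(X_tr)
    # Linear program over weight perturbations d (new weight = 1 + d, d >= 0):
    #   minimise sum of perturbations
    #   s.t.  fair_inf @ d <= -fair_gap   (predicted fairness gap removed)
    #         util_inf @ d >= 0           (predicted utility not harmed)
    res = linprog(c=np.ones(n),
                  A_ub=np.vstack([fair_inf, -util_inf]),
                  b_ub=np.array([-fair_gap, 0.0]),
                  bounds=[(0.0, 1.0)] * n,
                  method="highs")
    return 1.0 + res.x if res.success else np.ones(n)
```

In this sketch, the returned weights would be plugged back into a standard weighted training loss; retraining the vanilla classifier with them is what the abstract refers to as adjusting only the sample weights in the training phase.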
Cite
Text
Li and Liu. "Achieving Fairness at No Utility Cost via Data Reweighing with Influence." International Conference on Machine Learning, 2022.
Markdown
[Li and Liu. "Achieving Fairness at No Utility Cost via Data Reweighing with Influence." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/li2022icml-achieving/)
BibTeX
@inproceedings{li2022icml-achieving,
  title = {{Achieving Fairness at No Utility Cost via Data Reweighing with Influence}},
  author = {Li, Peizhao and Liu, Hongfu},
  booktitle = {International Conference on Machine Learning},
  year = {2022},
  pages = {12917-12930},
  volume = {162},
  url = {https://mlanthology.org/icml/2022/li2022icml-achieving/}
}
}