Does Machine Bring in Extra Bias in Learning? Approximating Discrimination Within Models Quickly

Abstract

Discrimination mitigation within machine learning (ML) models is complicated because multiple factors may interweave with one another, both hierarchically and historically. Yet few existing fairness measures can capture the discrimination level within ML models when dealing with multiple sensitive attributes. To bridge this gap, we propose a fairness measure based on distances between sets from a manifold perspective, named ‘harmonic fairness measure via manifolds (HFM)’, with three optional versions; it enables fine-grained discrimination evaluation for several sensitive attributes with binary or multiple values. To accelerate the computation of distances between sets, we further propose approximation algorithms for efficient bias evaluation. Empirical results demonstrate that our proposed fairness measure HFM is valid and that the approximation algorithms are effective and efficient.
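The abstract describes HFM only at a high level. The sketch below is a hypothetical illustration of the general idea of quantifying bias via distances between sets of representations grouped by a sensitive attribute, with random subsampling standing in for the paper's approximation algorithms. The function names (approx_set_distance, group_bias_proxy) and the specific distance (an averaged nearest-neighbour distance) are assumptions for illustration, not the paper's actual construction.

# Hypothetical sketch (not the authors' HFM): a set-distance-based bias proxy,
# approximated by subsampling for speed.
import numpy as np

def approx_set_distance(X_a, X_b, n_samples=256, rng=None):
    """Approximate a symmetric distance between two point sets.

    For a random subset of points in X_a, find the nearest point in X_b,
    do the same in the other direction, and average the two directed values.
    """
    rng = np.random.default_rng(rng)

    def directed(P, Q):
        idx = rng.choice(len(P), size=min(n_samples, len(P)), replace=False)
        # Pairwise Euclidean distances between the sampled P points and all of Q.
        d = np.linalg.norm(P[idx, None, :] - Q[None, :, :], axis=-1)
        return d.min(axis=1).mean()

    return 0.5 * (directed(X_a, X_b) + directed(X_b, X_a))

def group_bias_proxy(X, s):
    """Largest pairwise set distance across sensitive-attribute groups.

    X: (n, d) array of features or model representations.
    s: (n,) array of sensitive-attribute values (binary or multi-valued).
    """
    groups = [X[s == v] for v in np.unique(s)]
    dists = [approx_set_distance(groups[i], groups[j])
             for i in range(len(groups)) for j in range(i + 1, len(groups))]
    return max(dists) if dists else 0.0

Subsampling trades exactness for speed, which mirrors (only in spirit) the paper's motivation for approximation algorithms: exact set-to-set distances scale poorly with dataset size, while a sampled estimate can be computed quickly for repeated bias evaluation.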

Cite

Text

Bian et al. "Does Machine Bring in Extra Bias in Learning? Approximating Discrimination Within Models Quickly." NeurIPS 2024 Workshops: M3L, 2024.

Markdown

[Bian et al. "Does Machine Bring in Extra Bias in Learning? Approximating Discrimination Within Models Quickly." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/bian2024neuripsw-machine/)

BibTeX

@inproceedings{bian2024neuripsw-machine,
  title     = {{Does Machine Bring in Extra Bias in Learning? Approximating Discrimination Within Models Quickly}},
  author    = {Bian, Yijun and Luo, Yujie and Xu, Ping},
  booktitle = {NeurIPS 2024 Workshops: M3L},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/bian2024neuripsw-machine/}
}