Fairness-Aware Estimation of Graphical Models

Abstract

This paper examines the issue of fairness in the estimation of graphical models (GMs), particularly Gaussian, Covariance, and Ising models. These models play a vital role in understanding complex relationships in high-dimensional data. However, standard GMs can result in biased outcomes, especially when the underlying data involves sensitive characteristics or protected groups. To address this, we introduce a comprehensive framework designed to reduce bias in the estimation of GMs related to protected attributes. Our approach involves the integration of the pairwise graph disparity error and a tailored loss function into a nonsmooth multi-objective optimization problem, striving to achieve fairness across different sensitive groups while maintaining the effectiveness of the GMs. Experimental evaluations on synthetic and real-world datasets demonstrate that our framework effectively mitigates bias without undermining GMs' performance.
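The abstract describes the core construction only at a high level: a graphical-model loss and a pairwise graph disparity error combined into a nonsmooth multi-objective optimization problem. The paper's exact objective is not reproduced on this page, so the following is a minimal Python sketch of one plausible scalarized form for the Gaussian case. The disparity definition, the per-group covariance inputs, and the trade-off weight lam are illustrative assumptions for exposition, not the authors' formulation.

# A minimal sketch (not the authors' code) of a fairness-penalized
# Gaussian graphical model objective. The disparity term and the
# single-weight scalarization below are assumptions; the paper poses
# a nonsmooth multi-objective problem rather than a fixed weighting.
import numpy as np

def gaussian_gm_loss(Theta, S):
    """Gaussian graphical model negative log-likelihood:
    -log det(Theta) + tr(S @ Theta), with S a sample covariance
    and Theta a symmetric positive-definite precision matrix."""
    _, logdet = np.linalg.slogdet(Theta)
    return -logdet + np.trace(S @ Theta)

def pairwise_graph_disparity(Theta, group_covs):
    """Illustrative pairwise disparity: squared gaps between the
    per-group losses of a shared estimate Theta, summed over all
    pairs of sensitive groups (group_covs holds each group's
    sample covariance)."""
    losses = [gaussian_gm_loss(Theta, S_k) for S_k in group_covs]
    return sum((li - lj) ** 2
               for i, li in enumerate(losses)
               for lj in losses[i + 1:])

def fair_objective(Theta, S, group_covs, lam=1.0):
    # Scalarized trade-off between overall fit and cross-group disparity.
    return gaussian_gm_loss(Theta, S) + lam * pairwise_graph_disparity(Theta, group_covs)

In the multi-objective setting the abstract describes, the fit and disparity terms would be optimized jointly across sensitive groups rather than collapsed into a single weighted sum as in this sketch.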

Cite

Text

Zhou et al. "Fairness-Aware Estimation of Graphical Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-0568

Markdown

[Zhou et al. "Fairness-Aware Estimation of Graphical Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhou2024neurips-fairnessaware/) doi:10.52202/079017-0568

BibTeX

@inproceedings{zhou2024neurips-fairnessaware,
  title     = {{Fairness-Aware Estimation of Graphical Models}},
  author    = {Zhou, Zhuoping and Tarzanagh, Davoud Ataee and Hou, Bojian and Long, Qi and Shen, Li},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0568},
  url       = {https://mlanthology.org/neurips/2024/zhou2024neurips-fairnessaware/}
}