Fair Representation Learning Through Implicit Path Alignment

Abstract

We consider fair representation learning from the perspective that the optimal predictors, trained on top of the data representation, should be invariant across different sub-groups. Specifically, we formulate this intuition as a bi-level optimization, where the representation is learned in the outer loop and the invariant optimal group predictors are updated in the inner loop. Moreover, the proposed bi-level objective is shown to fulfill the sufficiency rule, which is desirable in various practical scenarios but has not been commonly studied in fair learning. In addition, to avoid the high computational and memory cost of differentiating through the inner loop of the bi-level objective, we propose an implicit path alignment algorithm, which relies only on the solution of the inner optimization and implicit differentiation rather than the exact optimization path. We further analyze the error gap of the implicit approach and empirically validate the proposed method in both classification and regression settings. Experimental results show a consistently better trade-off between prediction performance and fairness measurements.
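The bi-level structure described above can be illustrated with a small sketch. The following is a minimal illustration and not the authors' released implementation: the encoder, the squared loss, the two-group setting, and the squared-distance alignment penalty between group-optimal heads are simplifying assumptions, and `inner_solve`, `implicit_grad_theta`, and `outer_step` are hypothetical helper names. The key point it shows is that the gradient of the outer objective with respect to the representation parameters is obtained from the implicit function theorem at the inner optimum, so the inner optimization path never has to be unrolled.

```python
# Minimal sketch of the bi-level / implicit-differentiation idea (illustrative only).
import jax
import jax.numpy as jnp

def encode(theta, x):                        # representation phi_theta(x)
    return jnp.tanh(x @ theta)

def inner_loss(w, theta, x, y):              # group-wise predictor objective
    pred = encode(theta, x) @ w
    return jnp.mean((pred - y) ** 2) + 1e-3 * jnp.sum(w ** 2)

def inner_solve(theta, x, y, steps=300, lr=0.1):
    """Approximate the group-optimal head w*(theta) by gradient descent."""
    w = jnp.zeros(theta.shape[1])
    grad_w = jax.grad(inner_loss, argnums=0)
    for _ in range(steps):
        w = w - lr * grad_w(w, theta, x, y)
    return w

def implicit_grad_theta(theta, w_star, x, y, df_dw):
    """Chain-rule term (df/dw*) . (dw*/dtheta) via the implicit function theorem:
    dw*/dtheta = -(d^2 L/dw^2)^{-1} d^2 L/(dw dtheta), evaluated at the inner
    optimum, so no unrolling of the inner optimization is required."""
    H = jax.hessian(inner_loss, argnums=0)(w_star, theta, x, y)
    v = jnp.linalg.solve(H, df_dw)                       # H^{-1} df/dw*
    grad_w = lambda th: jax.grad(inner_loss, argnums=0)(w_star, th, x, y)
    _, vjp = jax.vjp(grad_w, theta)                      # vector-Jacobian product
    return -vjp(v)[0]                                    # -(d^2 L/dw dtheta)^T v

def outer_step(theta, groups, lr=0.05):
    """One representation update: align the two group-optimal predictors."""
    (x0, y0), (x1, y1) = groups
    w0, w1 = inner_solve(theta, x0, y0), inner_solve(theta, x1, y1)
    diff = w0 - w1                                       # alignment penalty ||w0* - w1*||^2
    g_theta = (implicit_grad_theta(theta, w0, x0, y0, 2.0 * diff)
               + implicit_grad_theta(theta, w1, x1, y1, -2.0 * diff))
    # illustrative direct term: pooled prediction loss, differentiated w.r.t. theta
    x_all, y_all = jnp.concatenate([x0, x1]), jnp.concatenate([y0, y1])
    g_theta += jax.grad(inner_loss, argnums=1)(w0, theta, x_all, y_all)
    return theta - lr * g_theta
```

In practice one would typically avoid materializing the inner Hessian and instead solve the linear system with conjugate gradient and Hessian-vector products, but the structure of the update is the same.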

Cite

Text

Shui et al. "Fair Representation Learning Through Implicit Path Alignment." International Conference on Machine Learning, 2022.

Markdown

[Shui et al. "Fair Representation Learning Through Implicit Path Alignment." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/shui2022icml-fair/)

BibTeX

@inproceedings{shui2022icml-fair,
  title     = {{Fair Representation Learning Through Implicit Path Alignment}},
  author    = {Shui, Changjian and Chen, Qi and Li, Jiaqi and Wang, Boyu and Gagné, Christian},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {20156--20175},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/shui2022icml-fair/}
}