Influence Based Approaches to Algorithmic Fairness: A Closer Look
Abstract
Off-the-shelf pre-trained models are increasingly common in machine learning. When deployed in the real world, it is essential that such models are not just accurate but also demonstrate qualities like fairness. This paper takes a closer look at recently proposed approaches that edit a pre-trained model for group fairness by re-weighting the training data. We offer perspectives that unify disparate weighting schemes from past studies and pave the way for new weighting strategies to address group fairness concerns.
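The re-weighting idea the abstract describes can be illustrated with a minimal first-order sketch: estimate how up-weighting each training example would move a group-fairness metric, then down-weight the examples that push the metric the wrong way. This is only an illustrative sketch under strong assumptions (logistic regression, an identity-Hessian approximation to the influence function, a demographic-parity gap as the metric, synthetic data); it is not the paper's actual weighting scheme, and all names here are hypothetical.

```python
import numpy as np

# Synthetic data: features X, binary group membership g, binary labels y.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(sample_weights, iters=500, lr=0.5):
    """Weighted logistic regression via plain gradient descent."""
    theta = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad = X.T @ (sample_weights * (p - y)) / sample_weights.sum()
        theta -= lr * grad
    return theta

theta = fit(np.ones(n))

def dp_gap(theta):
    """Demographic-parity gap: difference in mean predicted score by group."""
    p = sigmoid(X @ theta)
    return p[g == 1].mean() - p[g == 0].mean()

# Gradient of the fairness gap w.r.t. the parameters.
p = sigmoid(X @ theta)
s = p * (1.0 - p)
grad_fair = (X[g == 1] * s[g == 1, None]).mean(0) - (X[g == 0] * s[g == 0, None]).mean(0)

# Per-example logistic-loss gradients.
grad_i = X * (p - y)[:, None]

# First-order influence of up-weighting example i on the fairness gap,
# with the Hessian crudely approximated by the identity:
#   d gap / d eps_i  ~=  -grad_fair . H^{-1} grad_i  ~=  -grad_fair . grad_i
influence = -(grad_i @ grad_fair)

# Down-weight the 10% of examples whose up-weighting most widens the gap
# (sign chosen so a positive influence moves the gap away from zero).
harmful = (influence * np.sign(dp_gap(theta))) > np.quantile(
    influence * np.sign(dp_gap(theta)), 0.9
)
new_weights = np.where(harmful, 0.0, 1.0)
theta_rw = fit(new_weights)
```

In practice the Hessian would be inverted (or approximated with conjugate gradients), and the paper's point is precisely that different choices of metric, approximation, and thresholding yield the disparate weighting schemes it unifies.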
Cite
Text
Ghosh et al. "Influence Based Approaches to Algorithmic Fairness: A Closer Look." NeurIPS 2023 Workshops: XAIA, 2023.
BibTeX
@inproceedings{ghosh2023neuripsw-influence,
title = {{Influence Based Approaches to Algorithmic Fairness: A Closer Look}},
author = {Ghosh, Soumya and Sattigeri, Prasanna and Padhi, Inkit and Nagireddy, Manish and Chen, Jie},
booktitle = {NeurIPS 2023 Workshops: XAIA},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/ghosh2023neuripsw-influence/}
}