Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints
Abstract
We investigate how fairness relaxations scale to flexible classifiers like deep neural networks for images and text. We analyze an easy-to-use and robust way of imposing fairness constraints during training, and through this framework prove that some prior fairness surrogates exhibit degeneracies for non-convex models. We resolve these problems via three new surrogates: an adaptive data re-weighting, and two smooth upper bounds that are provably more robust than some previous methods. Our surrogates perform comparably to the state-of-the-art on low-dimensional fairness benchmarks, while achieving superior accuracy and stability for more complex computer vision and natural language processing tasks.
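A minimal sketch of the general recipe the abstract describes: training a classifier while penalizing a relaxed (surrogate) measure of fairness violation. This assumes PyTorch; the sigmoid relaxation, the demographic-parity gap, and the penalty weight lam are illustrative choices, not the paper's specific surrogates.

# Minimal sketch (not the authors' code) of fairness-constrained training
# via a surrogate penalty. The relaxation and penalty form are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: features x, binary labels y, binary protected attribute a.
n, d = 256, 10
x = torch.randn(n, d)
y = (torch.rand(n) < 0.5).float()
a = (torch.rand(n) < 0.5).float()

model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # penalty strength (hyperparameter, assumed)

for step in range(200):
    logits = model(x).squeeze(-1)
    # Demographic-parity gap measured on a smooth relaxation of the
    # hard decision 1[f(x) > 0]: here sigmoid(logits) plays the surrogate.
    probs = torch.sigmoid(logits)
    gap = probs[a == 1].mean() - probs[a == 0].mean()
    loss = bce(logits, y) + lam * gap.abs()
    opt.zero_grad()
    loss.backward()
    opt.step()

In this form the penalty is just one possible surrogate; the paper's contribution concerns which relaxations remain stable and well-behaved when the model is non-convex.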
Cite
Text
Bendekgey and Sudderth. "Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints." Neural Information Processing Systems, 2021.
Markdown
[Bendekgey and Sudderth. "Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/bendekgey2021neurips-scalable/)
BibTeX
@inproceedings{bendekgey2021neurips-scalable,
title = {{Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints}},
author = {Bendekgey, Henry C. and Sudderth, Erik B.},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/bendekgey2021neurips-scalable/}
}