Fairness and Bias in Online Selection
Abstract
There is growing awareness and concern about fairness in machine learning and algorithm design. This is particularly true in online selection problems, where decisions are often biased, for example, when assessing credit risks or hiring staff. We address the issues of fairness and bias in online selection by introducing multi-color versions of the classic secretary and prophet problems. Interestingly, existing algorithms for these problems are either very unfair or very inefficient, so we develop optimal fair algorithms for these new problems and provide tight bounds on their competitiveness. We validate our theoretical findings on real-world data.
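As background, here is a minimal sketch of the classic single-choice secretary rule that the paper's multi-color variants generalize. This is the standard 1/e-rule, not the paper's fair algorithm: observe the first n/e candidates without selecting, then accept the first candidate who beats everyone seen so far.

```python
import math
import random

def secretary_select(values):
    """Classic 1/e-rule: skip the first n/e candidates, then pick the
    first one who beats all candidates observed so far."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i  # index of the selected candidate
    return n - 1  # forced to accept the last candidate

# The rule selects the overall best candidate with probability ~1/e (~37%).
random.seed(0)
n, trials = 50, 2000
wins = sum(
    values.index(max(values)) == secretary_select(values)
    for values in ([random.random() for _ in range(n)] for _ in range(trials))
)
print(wins / trials)
```

The fair multi-color versions studied in the paper additionally balance selection probabilities across groups of candidates, which this single-group sketch does not attempt.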
Cite
Text
Correa et al. "Fairness and Bias in Online Selection." International Conference on Machine Learning, 2021.

Markdown
[Correa et al. "Fairness and Bias in Online Selection." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/correa2021icml-fairness/)

BibTeX
@inproceedings{correa2021icml-fairness,
  title = {{Fairness and Bias in Online Selection}},
  author = {Correa, Jose and Cristi, Andres and Duetting, Paul and Norouzi-Fard, Ashkan},
  booktitle = {International Conference on Machine Learning},
  year = {2021},
  pages = {2112--2121},
  volume = {139},
  url = {https://mlanthology.org/icml/2021/correa2021icml-fairness/}
}