Preventing Disparate Treatment in Sequential Decision Making
Abstract
We study fairness in sequential decision making environments, where at each time step a learning algorithm receives data corresponding to a new individual (e.g. a new job applicant) and must make an irrevocable decision about them (e.g. whether to hire the applicant) based on observations made so far. In order to prevent cases of disparate treatment, our time-dependent notion of fairness requires algorithmic decisions to be consistent: if two individuals are similar in the feature space and arrive during the same time epoch, the algorithm must assign them to similar outcomes. We propose a general framework for post-processing predictions made by a black-box learning model that guarantees the resulting sequence of outcomes is consistent. We show theoretically that imposing consistency will not significantly slow down learning. Our experiments on two real-world data sets illustrate and confirm this finding in practice.
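The consistency requirement described above can be illustrated with a small sketch (not the authors' actual algorithm): a Lipschitz-style post-processor that, before committing to a new outcome, clamps the black-box prediction so it never differs from any earlier same-epoch outcome by more than a constant times the feature-space distance. The function name, the `[0, 1]` outcome range, and the Lipschitz constant are illustrative assumptions.

```python
import numpy as np

def consistent_prediction(x_new, y_hat, history, lipschitz=1.0):
    """Clamp a black-box prediction so it stays consistent with earlier
    decisions in the same epoch: for every past pair (x, y), the returned
    outcome differs from y by at most lipschitz * ||x_new - x||.

    history: list of (features, assigned_outcome) pairs from this epoch.
    Outcomes are assumed to lie in [0, 1] (an illustrative choice).
    """
    lo, hi = 0.0, 1.0
    for x_past, y_past in history:
        slack = lipschitz * np.linalg.norm(x_new - x_past)
        lo = max(lo, y_past - slack)   # must not fall too far below y_past
        hi = min(hi, y_past + slack)   # must not rise too far above y_past
    if lo > hi:
        # The constraints conflict (can only happen if past outcomes were
        # themselves inconsistent); fall back to the raw prediction.
        return y_hat
    return min(max(y_hat, lo), hi)

# Example: an identical individual seen earlier forces the same outcome.
past = [(np.array([0.0]), 0.8)]
print(consistent_prediction(np.array([0.0]), 0.2, past))  # → 0.8
```

Two individuals with identical features must receive identical outcomes; as the feature distance grows, the admissible interval widens and the raw prediction passes through unchanged.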
Cite
Text
Heidari and Krause. "Preventing Disparate Treatment in Sequential Decision Making." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/311
Markdown
[Heidari and Krause. "Preventing Disparate Treatment in Sequential Decision Making." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/heidari2018ijcai-preventing/) doi:10.24963/IJCAI.2018/311
BibTeX
@inproceedings{heidari2018ijcai-preventing,
title = {{Preventing Disparate Treatment in Sequential Decision Making}},
author = {Heidari, Hoda and Krause, Andreas},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {2248-2254},
doi = {10.24963/IJCAI.2018/311},
url = {https://mlanthology.org/ijcai/2018/heidari2018ijcai-preventing/}
}