Online Learning with an Unknown Fairness Metric
Abstract
We consider the problem of online learning in the linear contextual bandits setting, but with strong individual fairness constraints governed by an unknown similarity metric. These constraints demand that we select similar actions or individuals with approximately equal probability [DHPRZ12], which may be at odds with optimizing reward, thus modeling settings where profit and social policy are in tension. We assume we learn about an unknown Mahalanobis similarity metric from only weak feedback that identifies fairness violations but does not quantify their extent. This is intended to represent the interventions of a regulator who "knows unfairness when he sees it" but nevertheless cannot enunciate a quantitative fairness metric over individuals. Our main result is an algorithm in the adversarial context setting whose number of fairness violations depends only logarithmically on T, while obtaining an optimal O(sqrt(T)) regret bound relative to the best fair policy.
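To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of the individual-fairness constraint and the weak regulator feedback the abstract describes, assuming a known metric matrix A for illustration: under a Mahalanobis metric d(x, y) = sqrt((x - y)^T A (x - y)), an allocation of selection probabilities p is fair when |p_i - p_j| <= d(x_i, x_j) for every pair, and the regulator reports only which pairs violate this. The function names and the tolerance parameter eps are hypothetical.

```python
import numpy as np

def mahalanobis(x, y, A):
    """Distance under the metric matrix A (unknown to the learner in the paper)."""
    d = x - y
    return float(np.sqrt(d @ A @ d))

def fairness_violations(X, p, A, eps=0.0):
    """Return the pairs (i, j) with |p_i - p_j| > d(x_i, x_j) + eps.

    This mimics the weak feedback in the paper: the learner is told
    *which* pairs violate fairness, not by how much.
    """
    n = len(p)
    return [(i, j)
            for i in range(n) for j in range(i + 1, n)
            if abs(p[i] - p[j]) > mahalanobis(X[i], X[j], A) + eps]

# Toy example with the identity metric: two nearby individuals treated
# very differently are flagged; a distant pair with the same probability
# gap is not.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0]])
p = np.array([0.9, 0.1, 0.1])
print(fairness_violations(X, p, np.eye(2)))  # [(0, 1)]
```

The point of the toy example: the pair (0, 1) is at distance 0.1 but has probability gap 0.8, so it is reported as a violation, while the pair (0, 2) has the same gap but distance 5.0 and is permitted.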
Citation
Gillen et al. "Online Learning with an Unknown Fairness Metric." Neural Information Processing Systems, 2018. https://mlanthology.org/neurips/2018/gillen2018neurips-online/
BibTeX
@inproceedings{gillen2018neurips-online,
  title     = {{Online Learning with an Unknown Fairness Metric}},
  author    = {Gillen, Stephen and Jung, Christopher and Kearns, Michael and Roth, Aaron},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {2600--2609},
  url       = {https://mlanthology.org/neurips/2018/gillen2018neurips-online/}
}