The Use of Non-Epistemic Values to Account for Bias in Automated Decision Making

Abstract

We consider the algorithmic shortlist problem of how to rank a list of choices for a decision. Just as the choices on a ballot are as important as the votes themselves, the decisions of whom to hire, whom to insure, or whom to admit depend directly on who is considered, who is categorized, or who meets the threshold for admittance. We frame this problem as one requiring additional non-epistemic context, which we use to normalize expected values, and propose a computational model for this context based on a social-psychological model of affect in social interactions.
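To make the framing concrete, the following is a minimal, hypothetical sketch of ranking a shortlist by an epistemic expected value that is discounted by a non-epistemic, affect-based term. The names (epistemic_score, affective_deflection) and the normalization rule are illustrative assumptions, not the model proposed in the paper.

# Illustrative sketch only: shortlist ranking where an epistemic expected value
# is normalized by a non-epistemic penalty. All names and the combination rule
# are assumptions for exposition, not the authors' model.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    epistemic_score: float       # e.g., predicted task performance in [0, 1]
    affective_deflection: float  # hypothetical affect-based term; higher = more social-contextual mismatch

def normalized_value(c: Candidate, weight: float = 1.0) -> float:
    """Discount the epistemic expected value by a non-epistemic penalty."""
    return c.epistemic_score / (1.0 + weight * c.affective_deflection)

def rank_shortlist(candidates: list[Candidate], weight: float = 1.0) -> list[Candidate]:
    """Order candidates by the normalized value, highest first."""
    return sorted(candidates, key=lambda c: normalized_value(c, weight), reverse=True)

if __name__ == "__main__":
    pool = [
        Candidate("A", epistemic_score=0.90, affective_deflection=2.0),
        Candidate("B", epistemic_score=0.85, affective_deflection=0.2),
        Candidate("C", epistemic_score=0.70, affective_deflection=0.1),
    ]
    for c in rank_shortlist(pool):
        print(c.name, round(normalized_value(c), 3))

In this toy example, candidate A has the highest epistemic score but is ranked below B once the non-epistemic penalty is applied, illustrating how the shortlist itself, not just the final choice, shifts with the added context.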

Cite

Text

Hoey et al. "The Use of Non-Epistemic Values to Account for Bias in Automated Decision Making." NeurIPS 2022 Workshops: MLSW, 2022.

Markdown

[Hoey et al. "The Use of Non-Epistemic Values to Account for Bias in Automated Decision Making." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/hoey2022neuripsw-use/)

BibTeX

@inproceedings{hoey2022neuripsw-use,
  title     = {{The Use of Non-Epistemic Values to Account for Bias in Automated Decision Making}},
  author    = {Hoey, Jesse and Chan, Gabrielle and Doucet, Mathieu and Risi, Christopher and Zhang, Freya},
  booktitle = {NeurIPS 2022 Workshops: MLSW},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/hoey2022neuripsw-use/}
}