Face Recognition: Too Bias, or Not Too Bias?

Abstract

We reveal critical insights into problems of bias in state-of-the-art facial recognition (FR) systems using a novel Balanced Faces In the Wild (BFW) dataset: data balanced for gender and ethnic groups. We show variations in the optimal scoring threshold for face pairs across different subgroups. Thus, the conventional approach of learning a global threshold for all pairs results in performance gaps between subgroups. By learning subgroup-specific thresholds, we reduce performance gaps and also show a notable boost in overall performance. Furthermore, we conduct a human evaluation to measure bias in humans, which supports the hypothesis that an analogous bias exists in human perception. For the BFW database, source code, and more, visit https://github.com/visionjo/facerec-bias-bfw.
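The subgroup-specific thresholding idea from the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: function names and the accuracy-maximizing criterion are assumptions (the paper could equally fix a target false-positive rate per subgroup).

```python
import numpy as np

def best_threshold(scores, is_genuine):
    """Pick the similarity threshold that maximizes verification accuracy.

    scores: similarity scores for face pairs; is_genuine: True when the
    pair shares an identity. Hypothetical helper, not the paper's code.
    """
    candidates = np.unique(scores)
    accuracy = [np.mean((scores >= t) == is_genuine) for t in candidates]
    return candidates[int(np.argmax(accuracy))]

def subgroup_thresholds(scores, is_genuine, subgroup):
    """Learn one threshold per subgroup (e.g. gender x ethnicity),
    instead of a single global threshold for all pairs."""
    return {g: best_threshold(scores[subgroup == g], is_genuine[subgroup == g])
            for g in np.unique(subgroup)}
```

At evaluation time, each face pair is then compared against the threshold of its own subgroup rather than one global cutoff, which is what closes the performance gaps the abstract describes.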

Cite

Text

Robinson et al. "Face Recognition: Too Bias, or Not Too Bias?." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00008

Markdown

[Robinson et al. "Face Recognition: Too Bias, or Not Too Bias?." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/robinson2020cvprw-face/) doi:10.1109/CVPRW50498.2020.00008

BibTeX

@inproceedings{robinson2020cvprw-face,
  title     = {{Face Recognition: Too Bias, or Not Too Bias?}},
  author    = {Robinson, Joseph P. and Livitz, Gennady and Henon, Yann and Qin, Can and Fu, Yun and Timoner, Samson},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {1-10},
  doi       = {10.1109/CVPRW50498.2020.00008},
  url       = {https://mlanthology.org/cvprw/2020/robinson2020cvprw-face/}
}