Assessing Performance and Fairness Metrics in Face Recognition - Bootstrap Methods
Abstract
The ROC curve is the major tool for assessing not only the performance but also the fairness properties of a similarity scoring function in Face Recognition. To draw reliable conclusions from empirical ROC analysis, it is necessary to accurately evaluate the uncertainty attached to statistical versions of the ROC curves of interest. For this purpose, we explain in this paper that, because the True/False Acceptance Rates take the form of U-statistics in the case of similarity scoring, the naive bootstrap approach is not valid here, and that a dedicated recentering technique must be used instead. This is illustrated on real face-image data, when applied to several ROC-based metrics such as popular fairness metrics.
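The key point of the abstract can be illustrated numerically. The empirical False Acceptance Rate at a threshold t is a degree-two U-statistic, the average of the kernel h(x_i, x_j) = 1{s(x_i, x_j) > t} over distinct pairs. The sketch below, on synthetic embeddings with a toy cosine-similarity scorer (a stand-in for a real FR model, not the paper's setup), bootstraps this U-statistic with a generic kernel-recentering scheme in the spirit of the Arcones-Giné bootstrap for U-statistics; the paper's exact recentering procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_scores(X):
    # Toy cosine-similarity scoring function (stand-in for a FR model).
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def u_stat(H):
    # Mean of the kernel matrix over distinct pairs i < j.
    iu = np.triu_indices(H.shape[0], k=1)
    return H[iu].mean()

def recentered_bootstrap_far(X, t, B=200):
    """Bootstrap the empirical FAR U-statistic with a recentered kernel
    (generic scheme, not necessarily the paper's exact procedure)."""
    n = len(X)
    H = (pairwise_scores(X) > t).astype(float)  # kernel values h(x_i, x_j)
    u_n = u_stat(H)                             # empirical FAR at threshold t
    boot = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)        # resample individuals, not pairs
        # Recentered bootstrap U-statistic; for simplicity, pairs that draw
        # the same individual twice are kept (h(x, x) is well defined here).
        boot[b] = u_stat(H[np.ix_(idx, idx)] - u_n)
    return u_n, boot

X = rng.normal(size=(100, 16))                  # synthetic embeddings
u_n, boot = recentered_bootstrap_far(X, t=0.3)
# boot is (approximately) centered at 0, so a basic percentile interval
# for the FAR is obtained by shifting its quantiles back by u_n.
ci = u_n + np.quantile(boot, [0.025, 0.975])
```

Note that the resampling unit is the individual image, not the pair of images: resampling pairs directly would ignore the dependence between pairs sharing an image, which is precisely why the naive bootstrap fails for U-statistics.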
Cite
Conti and Clémençon. "Assessing Performance and Fairness Metrics in Face Recognition - Bootstrap Methods." NeurIPS 2022 Workshops: TSRML, 2022.

BibTeX
@inproceedings{conti2022neuripsw-assessing,
title = {{Assessing Performance and Fairness Metrics in Face Recognition - Bootstrap Methods}},
author = {Conti, Jean-Rémy and Clémençon, Stephan},
booktitle = {NeurIPS 2022 Workshops: TSRML},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/conti2022neuripsw-assessing/}
}