How Knowledge Distillation Mitigates the Synthetic Gap in Fair Face Recognition
Abstract
Leveraging Knowledge Distillation (KD) strategies, we devise an approach to counter the recent retraction of face recognition datasets. Given a Teacher model pretrained on a real dataset, we show that carefully using synthetic datasets, or a mix of real and synthetic datasets, to distil knowledge from this teacher to smaller students can yield surprising results. In this sense, we trained 33 different models, with and without KD, on different datasets, with different architectures and losses. Our findings are consistent: using KD leads to performance gains across all ethnicities and to reduced bias. In addition, it helps mitigate the performance gap between real and synthetic data. This approach addresses the limitations of synthetic data training, improving both the accuracy and fairness of face recognition models.
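The abstract does not spell out the distillation objective used in the paper; as a rough, hedged illustration of the general technique, the sketch below implements the classic Hinton-style KD loss, combining hard-label cross-entropy with a temperature-softened KL term between teacher and student outputs. The function names, temperature, and weighting are illustrative assumptions, not the paper's actual configuration.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.5):
    """Illustrative Hinton-style KD loss (not the paper's exact setup):
    a weighted sum of (1) cross-entropy with the hard label and
    (2) KL divergence between softened teacher and student
    distributions, scaled by T^2 to keep gradients comparable."""
    # Hard-label cross-entropy term.
    p_student = softmax(student_logits)
    ce = -math.log(p_student[true_label])
    # Soft-label KL term with temperature smoothing.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl
```

When the student matches the teacher exactly, the KL term vanishes and only the supervised cross-entropy remains, which is why KD can be seen as a regularised form of standard training.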
Cite
Text
Neto et al. "How Knowledge Distillation Mitigates the Synthetic Gap in Fair Face Recognition." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91907-7_21
Markdown
[Neto et al. "How Knowledge Distillation Mitigates the Synthetic Gap in Fair Face Recognition." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/neto2024eccvw-knowledge/) doi:10.1007/978-3-031-91907-7_21
BibTeX
@inproceedings{neto2024eccvw-knowledge,
title = {{How Knowledge Distillation Mitigates the Synthetic Gap in Fair Face Recognition}},
author = {Neto, Pedro C. and Colakovic, Ivona and Karakatic, Saso and Sequeira, Ana Filipa},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {351--367},
doi = {10.1007/978-3-031-91907-7_21},
url = {https://mlanthology.org/eccvw/2024/neto2024eccvw-knowledge/}
}