Federated Learning for Face Recognition with Gradient Correction

Abstract

With growing concern over privacy in face recognition, federated learning has emerged as one of the most prevalent approaches to studying the unconstrained face recognition problem with private, decentralized data. However, conventional decentralized federated algorithms that share the full network parameters among clients suffer from privacy leakage in the face recognition setting. In this work, we introduce FedGC, a framework that tackles federated learning for face recognition and guarantees stronger privacy. We explore a novel idea of correcting gradients from the perspective of backward propagation and propose a softmax-based regularizer that corrects the gradients of class embeddings by precisely injecting a cross-client gradient term. Theoretically, we show that FedGC constitutes a valid loss function similar to the standard softmax. Extensive experiments on several popular benchmark datasets validate the superiority of FedGC, which can match the performance of conventional centralized methods that use the full training dataset.
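To make the idea of a softmax-based cross-client regularizer concrete, below is a minimal illustrative sketch, not the authors' implementation. It assumes each client keeps its own class embeddings (`local_W`) and receives only the class embeddings of other clients (`other_W`); the regularizer treats other clients' embeddings as softmax negatives, so its gradient with respect to each local embedding contains a cross-client term, mirroring the gradient-correction intuition described in the abstract.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D array of logits
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_client_regularizer(local_W, other_W):
    """Illustrative softmax-based regularizer (hypothetical sketch).

    For each local class embedding w, build a softmax over
    [w . w, v_1 . w, ..., v_m . w] where v_j are other clients'
    class embeddings, and penalize -log p(self). The gradient of
    this term w.r.t. w contains sum_j p_j * v_j, i.e. an injected
    cross-client gradient term.
    """
    loss = 0.0
    grads = np.zeros_like(local_W)
    for i, w in enumerate(local_W):
        logits = np.concatenate(([w @ w], other_W @ w))
        p = softmax(logits)
        loss += -np.log(p[0])
        # d(-log p_0)/dw = (p_0 - 1) * 2w  +  sum_j p_j * v_j
        grads[i] = (p[0] - 1.0) * 2.0 * w + (p[1:, None] * other_W).sum(axis=0)
    n = len(local_W)
    return loss / n, grads / n
```

The cross-client term `sum_j p_j * v_j` is what a purely local softmax could never produce, since a client's own data never references other clients' classes; injecting it restores the repulsion between class embeddings that a centralized softmax would provide.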

Cite

Text

Niu and Deng. "Federated Learning for Face Recognition with Gradient Correction." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I2.20095

Markdown

[Niu and Deng. "Federated Learning for Face Recognition with Gradient Correction." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/niu2022aaai-federated/) doi:10.1609/AAAI.V36I2.20095

BibTeX

@inproceedings{niu2022aaai-federated,
  title     = {{Federated Learning for Face Recognition with Gradient Correction}},
  author    = {Niu, Yifan and Deng, Weihong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {1999--2007},
  doi       = {10.1609/AAAI.V36I2.20095},
  url       = {https://mlanthology.org/aaai/2022/niu2022aaai-federated/}
}