Debiasing Model Updates for Improving Personalized Federated Training

Abstract

We propose a novel federated learning method that is customized to the objective of each edge device. In our method, a server trains a global meta-model by collaborating with devices without actually sharing data; each device then personalizes the trained global meta-model locally to meet its specific objective. Unlike the conventional federated learning setting, training customized models for each device is hindered both by the inherent data biases of the various devices and by the requirements imposed by the federated architecture. We propose gradient-correction methods that leverage prior work and explicitly de-bias the meta-model in the distributed heterogeneous-data setting to learn personalized device models. We present convergence guarantees of our method for strongly convex, convex, and nonconvex meta objectives. We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings.
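To make the training structure in the abstract concrete (server-side meta-model, drift-corrected local updates, then per-device personalization), below is a minimal sketch. It is not the paper's algorithm: the debiasing step is stood in for by a SCAFFOLD-style control-variate correction, the clients are synthetic least-squares problems, and all names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of personalized federated training with a gradient-correction
# (debiasing) term. NOT the paper's exact update rule: a SCAFFOLD-style control
# variate is used here only to illustrate removing client drift before local
# personalization. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous clients: each device has its own ground truth.
n_clients, n_samples, dim = 10, 50, 5
clients = []
for _ in range(n_clients):
    w_true = rng.normal(size=dim)                    # device-specific objective
    X = rng.normal(size=(n_samples, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n_samples)
    clients.append((X, y))

def grad(w, X, y):
    """Gradient of the least-squares loss 0.5/n * ||Xw - y||^2."""
    return X.T @ (X @ w - y) / len(y)

w_global = np.zeros(dim)                             # server meta-model
c_global = np.zeros(dim)                             # server correction state
c_local = [np.zeros(dim) for _ in range(n_clients)]  # per-device corrections
lr, local_steps = 0.05, 10

for rnd in range(100):                               # communication rounds
    deltas, c_deltas = [], []
    for i, (X, y) in enumerate(clients):
        w = w_global.copy()
        for _ in range(local_steps):
            # Debiased step: subtract the device's drift estimate and add the
            # server's, so local updates track the global meta-objective.
            w -= lr * (grad(w, X, y) - c_local[i] + c_global)
        c_new = c_local[i] - c_global + (w_global - w) / (lr * local_steps)
        deltas.append(w - w_global)                  # only updates are shared,
        c_deltas.append(c_new - c_local[i])          # never raw data
        c_local[i] = c_new
    w_global += np.mean(deltas, axis=0)
    c_global += np.mean(c_deltas, axis=0)

# Personalization: each device fine-tunes the meta-model on its own data.
personalized = []
for X, y in clients:
    w = w_global.copy()
    for _ in range(5):
        w -= lr * grad(w, X, y)
    personalized.append(w)
```

The correction states play the role the abstract assigns to debiasing: without them, averaging updates from heterogeneous devices biases the meta-model toward well-represented data distributions before personalization even begins.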

Cite

Text

Acar et al. "Debiasing Model Updates for Improving Personalized Federated Training." International Conference on Machine Learning, 2021.

Markdown

[Acar et al. "Debiasing Model Updates for Improving Personalized Federated Training." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/acar2021icml-debiasing/)

BibTeX

@inproceedings{acar2021icml-debiasing,
  title     = {{Debiasing Model Updates for Improving Personalized Federated Training}},
  author    = {Acar, Durmus Alp Emre and Zhao, Yue and Zhu, Ruizhao and Matas, Ramon and Mattina, Matthew and Whatmough, Paul and Saligrama, Venkatesh},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {21--31},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/acar2021icml-debiasing/}
}