Federated Optimization for Heterogeneous Networks

Abstract

Federated learning involves training and effectively combining machine learning models from distributed partitions of data (i.e., tasks) on edge devices, and can be naturally viewed as a multi-task learning problem. While Federated Averaging (FedAvg) is the leading optimization method for training non-convex models in this setting, its behavior is not well understood in realistic federated settings where the devices/tasks are statistically heterogeneous, i.e., where each device collects data in a non-identical fashion. In this work, we introduce a framework, called FedProx, to tackle statistical heterogeneity. FedProx encompasses FedAvg as a special case. We provide convergence guarantees for FedProx through a device dissimilarity assumption. Our empirical evaluation validates our theoretical analysis and demonstrates the improved robustness and stability of FedProx for learning in heterogeneous networks.
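
In the full paper, FedProx modifies each device's local objective by adding a proximal term (mu/2)||w - w^t||^2 that discourages local updates from drifting far from the current global model w^t; setting mu = 0 recovers FedAvg's local step. The following is a minimal NumPy sketch of one communication round under that description. The SGD local solver, the learning rate, the epoch count, the uniform averaging over all devices, and the toy quadratic losses in the usage example are illustrative assumptions, not the paper's exact experimental setup.

import numpy as np

def local_update(w_global, grad_fn, mu=0.1, lr=0.01, epochs=5):
    """Inexactly minimize F_k(w) + (mu/2)||w - w_global||^2 by gradient steps.

    grad_fn(w) returns the gradient of the device's local loss F_k.
    With mu = 0 this reduces to a FedAvg-style local update.
    """
    w = w_global.copy()
    for _ in range(epochs):
        # Gradient of the local loss plus the proximal term.
        g = grad_fn(w) + mu * (w - w_global)
        w = w - lr * g
    return w

def fedprox_round(w_global, device_grad_fns, mu=0.1):
    """One round: each device solves its proximal subproblem locally,
    then the server averages the resulting models."""
    updates = [local_update(w_global, g, mu=mu) for g in device_grad_fns]
    return np.mean(updates, axis=0)

# Toy usage: two heterogeneous devices with quadratic losses
# F_k(w) = 0.5 * ||w - c_k||^2, whose gradients are w - c_k.
c1, c2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w = np.zeros(2)
for _ in range(20):
    w = fedprox_round(w, [lambda w: w - c1, lambda w: w - c2], mu=0.1)

The proximal term only changes the local gradient computation, which is why FedAvg falls out as the mu = 0 special case while larger mu trades local progress for stability across heterogeneous devices.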

Cite

Text

Li et al. "Federated Optimization for Heterogeneous Networks." ICML 2019 Workshops: AMTL, 2019.

Markdown

[Li et al. "Federated Optimization for Heterogeneous Networks." ICML 2019 Workshops: AMTL, 2019.](https://mlanthology.org/icmlw/2019/li2019icmlw-federated/)

BibTeX

@inproceedings{li2019icmlw-federated,
  title     = {{Federated Optimization for Heterogeneous Networks}},
  author    = {Li, Tian and Sahu, Anit Kumar and Zaheer, Manzil and Sanjabi, Maziar and Talwalkar, Ameet and Smith, Virginia},
  booktitle = {ICML 2019 Workshops: AMTL},
  year      = {2019},
  url       = {https://mlanthology.org/icmlw/2019/li2019icmlw-federated/}
}