Federated Asymptotics: A Model to Compare Federated Learning Algorithms

Abstract

We develop an asymptotic framework to compare the test performance of (personalized) federated learning algorithms; our purpose is to move beyond algorithmic convergence arguments. To that end, we study a high-dimensional linear regression model to elucidate the statistical properties (per-client test error) of loss minimizers. Our techniques and model allow precise predictions about the benefits of personalization and information sharing in federated scenarios, including that Federated Averaging with simple client fine-tuning achieves identical asymptotic risk to more intricate meta-learning approaches and outperforms naive Federated Averaging. We evaluate and corroborate these theoretical predictions on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.
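
A minimal, hypothetical sketch of the comparison the abstract describes, assuming a toy heterogeneous linear regression model in NumPy: each client's true parameter is drawn around a shared center, and we compare average per-client test error for purely local least squares, a one-shot parameter average standing in for naive Federated Averaging, and that average after a few local fine-tuning steps. The dimensions, noise level, the `ridge` and `finetune` helpers, and the one-shot averaging shortcut are illustrative assumptions, not the paper's construction or experiments.

```python
# Hypothetical sketch (not the paper's code): per-client test error of
# local-only fitting, a pooled "global" estimator (a stand-in for naive
# Federated Averaging), and the global estimator fine-tuned per client.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 20, 50, 40           # dimension, number of clients, samples per client
sigma = 0.5                    # observation noise level
heterogeneity = 0.3            # spread of client parameters around a shared center

theta_star = rng.normal(size=d)                                 # shared center
thetas = theta_star + heterogeneity * rng.normal(size=(m, d))   # per-client truths

def make_data(theta, n_samples):
    X = rng.normal(size=(n_samples, d))
    y = X @ theta + sigma * rng.normal(size=n_samples)
    return X, y

train = [make_data(th, n) for th in thetas]
test = [make_data(th, 1000) for th in thetas]

def ridge(X, y, lam=1.0):
    # Regularized least squares on one client's data.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Local-only baseline: each client fits only its own data.
local = [ridge(X, y) for X, y in train]

# "Naive FedAvg" stand-in: one-shot average of the per-client fits
# (an illustrative shortcut, not the iterative algorithm).
global_est = np.mean(local, axis=0)

def finetune(theta0, X, y, steps=20, lr=1e-2):
    # A few local gradient steps on squared loss, starting from the global model.
    theta = theta0.copy()
    for _ in range(steps):
        theta -= lr * X.T @ (X @ theta - y) / len(y)
    return theta

finetuned = [finetune(global_est, X, y) for X, y in train]

def per_client_risk(estimates):
    # Average test mean-squared error across clients.
    risks = [np.mean((Xt @ th - yt) ** 2) for th, (Xt, yt) in zip(estimates, test)]
    return float(np.mean(risks))

print("local only      :", per_client_risk(local))
print("global (naive)  :", per_client_risk([global_est] * m))
print("global+finetune :", per_client_risk(finetuned))
```

In a heterogeneous setting like this toy one, the fine-tuned global estimator would typically match or beat both the local-only and naive-averaging baselines, mirroring the qualitative prediction stated in the abstract.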

Cite

Text

Cheng et al. "Federated Asymptotics: A Model to Compare Federated Learning Algorithms." Artificial Intelligence and Statistics, 2023.

Markdown

[Cheng et al. "Federated Asymptotics: A Model to Compare Federated Learning Algorithms." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/cheng2023aistats-federated/)

BibTeX

@inproceedings{cheng2023aistats-federated,
  title     = {{Federated Asymptotics: A Model to Compare Federated Learning Algorithms}},
  author    = {Cheng, Gary and Chadha, Karan and Duchi, John},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {10650--10689},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/cheng2023aistats-federated/}
}