On the Convergence of Hierarchical Federated Learning with Partial Worker Participation

Abstract

Hierarchical federated learning (HFL) has emerged as the architecture of choice for multi-level communication networks, mainly because of its data privacy protection and low communication cost. However, existing convergence analyses for HFL rely on the assumptions of full worker participation and/or i.i.d. datasets across workers, both of which rarely hold in practice. Motivated by this, we propose a unified convergence analysis framework for HFL that covers both full and partial worker participation with non-i.i.d. data, non-convex objective functions, and stochastic gradients. We further develop a three-sided learning rates algorithm to mitigate the data divergence issue, thereby achieving better convergence performance. Our theoretical results provide key insights into why partial participation in HFL is beneficial in significantly reducing data divergence compared to standard FL. In addition, the convergence analysis allows a degree of individualization for each cluster in HFL, indicating that adjusting the worker sampling ratio and round period can improve convergence behavior.
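The structure the abstract describes can be sketched as follows: workers run local SGD with a worker-side rate, edge servers aggregate updates from a sampled subset of workers (partial participation) with an edge-side rate, and the cloud server aggregates cluster updates with a global rate. This is a minimal toy sketch on a synthetic least-squares objective, not the paper's algorithm; the rate names (`eta_w`, `eta_e`, `eta_g`), the uniform sampling scheme, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-identical worker objectives: f_{c,i}(w) = 0.5 * ||A[c,i] w - b[c,i]||^2,
# with CLUSTERS edge servers, each owning WORKERS_PER_CLUSTER workers.
DIM, CLUSTERS, WORKERS_PER_CLUSTER, SAMPLES = 5, 3, 8, 10
A = rng.normal(size=(CLUSTERS, WORKERS_PER_CLUSTER, SAMPLES, DIM))
b = rng.normal(size=(CLUSTERS, WORKERS_PER_CLUSTER, SAMPLES))

def stochastic_grad(w, c, i):
    # Stochastic gradient: one uniformly sampled data row of worker (c, i).
    k = rng.integers(SAMPLES)
    a = A[c, i, k]
    return (a @ w - b[c, i, k]) * a

# Three-sided learning rates (hypothetical names): worker-side for local SGD,
# edge-side for intra-cluster aggregation, global for cloud aggregation.
eta_w, eta_e, eta_g = 0.05, 1.0, 1.0
LOCAL_STEPS, EDGE_ROUNDS, GLOBAL_ROUNDS = 5, 4, 50
PARTICIPANTS = 3  # partial participation: workers sampled per cluster per round

w_global = np.zeros(DIM)
for _ in range(GLOBAL_ROUNDS):
    cluster_updates = []
    for c in range(CLUSTERS):
        w_edge = w_global.copy()
        for _ in range(EDGE_ROUNDS):
            # Sample a subset of this cluster's workers without replacement.
            sampled = rng.choice(WORKERS_PER_CLUSTER, size=PARTICIPANTS,
                                 replace=False)
            deltas = []
            for i in sampled:
                w = w_edge.copy()
                for _ in range(LOCAL_STEPS):
                    w -= eta_w * stochastic_grad(w, c, i)
                deltas.append(w - w_edge)
            # Edge server applies the averaged update with the edge-side rate.
            w_edge += eta_e * np.mean(deltas, axis=0)
        cluster_updates.append(w_edge - w_global)
    # Cloud server applies the averaged cluster update with the global rate.
    w_global += eta_g * np.mean(cluster_updates, axis=0)
```

Decoupling the three rates lets the edge and cloud steps damp or amplify the averaged updates independently of the worker-side step size, which is the knob the analysis uses to control the divergence introduced by non-i.i.d. data and partial sampling.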

Cite

Text

Jiang and Zhu. "On the Convergence of Hierarchical Federated Learning with Partial Worker Participation." Uncertainty in Artificial Intelligence, 2024.

Markdown

[Jiang and Zhu. "On the Convergence of Hierarchical Federated Learning with Partial Worker Participation." Uncertainty in Artificial Intelligence, 2024.](https://mlanthology.org/uai/2024/jiang2024uai-convergence/)

BibTeX

@inproceedings{jiang2024uai-convergence,
  title     = {{On the Convergence of Hierarchical Federated Learning with Partial Worker Participation}},
  author    = {Jiang, Xiaohan and Zhu, Hongbin},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2024},
  pages     = {1797--1824},
  volume    = {244},
  url       = {https://mlanthology.org/uai/2024/jiang2024uai-convergence/}
}