Federated Learning with Flexible Architectures

Abstract

Traditional federated learning (FL) methods have limited support for clients with varying computational and communication abilities, leading to inefficiencies and potential inaccuracies in model training. This limitation hinders the widespread adoption of FL in diverse and resource-constrained environments, such as those with client devices ranging from powerful servers to mobile devices. To address this limitation, this paper introduces Federated Learning with Flexible Architectures (FedFA), an FL training algorithm that allows clients to train models of different widths and depths. Each client can select a network architecture suitable for its resources, with shallower and thinner networks requiring fewer computing resources for training. Unlike prior work in this area, FedFA incorporates a layer grafting technique to align clients' local architectures with the largest network architecture in the FL system during model aggregation. Layer grafting ensures that all client contributions are uniformly integrated into the global model, minimizing the risk of any individual client's data skewing the model's parameters disproportionately, and it offers security benefits. Moreover, FedFA introduces a scalable aggregation method to manage scale variations in weights among different network architectures. Experimentally, FedFA outperforms previous width- and depth-flexible aggregation strategies: its testing accuracy matches (1.00 times) or exceeds theirs by up to 1.16 times globally in IID settings, and is 0.98 to 1.13 times higher locally and 0.95 to 1.20 times higher globally in non-IID settings. Furthermore, FedFA demonstrates increased robustness against performance degradation in backdoor attack scenarios. Under attack, earlier strategies exhibit larger drops in testing accuracy than FedFA: 1.01 to 2.11 times larger globally for IID data, and 0.89 to 3.31 times larger locally and 1.11 to 1.74 times larger globally for non-IID data.
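The core idea behind layer grafting can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the paper's actual algorithm: each client's smaller (shallower or thinner) model is embedded into the largest architecture by copying its trained sub-network into the corresponding slice and filling the remaining entries from the current global model, so every client contributes a full-size update that can be averaged uniformly. The layer-to-slice mapping and the uniform averaging are simplifying assumptions.

```python
import numpy as np

def graft(client_layers, global_layers):
    """Embed a client's (possibly shallower/thinner) model into the global shape.

    Layers the client did not train keep the current global weights; thinner
    layers fill only the top-left slice of the corresponding global layer.
    """
    grafted = [g.copy() for g in global_layers]
    for i, w in enumerate(client_layers):
        r, c = w.shape
        grafted[i][:r, :c] = w
    return grafted

def aggregate(clients, global_layers):
    """Average the grafted, full-size client models (uniform weights for simplicity)."""
    grafted = [graft(c, global_layers) for c in clients]
    return [np.mean([g[i] for g in grafted], axis=0)
            for i in range(len(global_layers))]

# Largest architecture: two 4x4 layers. Client 0 trains a single thin 2x2 layer;
# client 1 trains the full architecture.
global_layers = [np.zeros((4, 4)), np.zeros((4, 4))]
clients = [
    [np.ones((2, 2))],                      # shallow and thin client
    [np.ones((4, 4)), np.ones((4, 4))],     # full-size client
]
new_global = aggregate(clients, global_layers)
```

In this toy run, positions trained by both clients average their updates, while positions trained only by the full-size client blend that client's weights with the grafted global values, so no single client's architecture dominates the aggregate.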

Cite

Text

Park and Joe-Wong. "Federated Learning with Flexible Architectures." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2024. doi:10.1007/978-3-031-70344-7_9

Markdown

[Park and Joe-Wong. "Federated Learning with Flexible Architectures." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2024.](https://mlanthology.org/ecmlpkdd/2024/park2024ecmlpkdd-federated/) doi:10.1007/978-3-031-70344-7_9

BibTeX

@inproceedings{park2024ecmlpkdd-federated,
  title     = {{Federated Learning with Flexible Architectures}},
  author    = {Park, Jong-Ik and Joe-Wong, Carlee},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2024},
  pages     = {143--161},
  doi       = {10.1007/978-3-031-70344-7_9},
  url       = {https://mlanthology.org/ecmlpkdd/2024/park2024ecmlpkdd-federated/}
}