Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout

Abstract

We focus on dropout techniques for asynchronous distributed computations in federated learning (FL) scenarios. We propose \texttt{AsyncDrop}, a novel asynchronous FL framework with smart (i.e., informed/structured) dropout that achieves better performance than state-of-the-art asynchronous methodologies, while incurring lower communication and training-time costs. The key idea revolves around creating sub-models out of the global model that account for device heterogeneity. We conjecture that such an approach can be theoretically justified. We implement our approach and compare it against other asynchronous baseline methods, obtained by adapting current synchronous FL algorithms to asynchronous scenarios. Empirically, \texttt{AsyncDrop} significantly reduces the communication cost and training time, while improving the final test accuracy in non-i.i.d. scenarios.
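To make the sub-model idea concrete, below is a minimal sketch of structured dropout over a global model, where each device receives and trains only a capacity-matched subset of whole hidden units whose update is merged back on arrival. This is an illustrative assumption-based sketch, not the authors' implementation; the helper names (make_submodel_mask, local_update), the single dense layer, and the keep ratios are all hypothetical.

# Illustrative sketch only; names, layer shapes, and keep ratios are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Global model: a single dense layer (hidden_dim x input_dim) for illustration.
global_W = rng.normal(size=(8, 4))

def make_submodel_mask(hidden_dim: int, keep_ratio: float) -> np.ndarray:
    """Structured dropout: keep a random subset of whole hidden units,
    sized according to the device's capacity (keep_ratio)."""
    kept = rng.choice(hidden_dim, size=max(1, int(keep_ratio * hidden_dim)),
                      replace=False)
    mask = np.zeros(hidden_dim, dtype=bool)
    mask[kept] = True
    return mask

def local_update(W_sub: np.ndarray) -> np.ndarray:
    """Placeholder for a device's local training (e.g., SGD on local data)."""
    return W_sub - 0.01 * rng.normal(size=W_sub.shape)  # dummy gradient step

# Heterogeneous devices: weaker devices train smaller sub-models.
for device_id, keep_ratio in enumerate([0.25, 0.5, 1.0]):
    mask = make_submodel_mask(global_W.shape[0], keep_ratio)
    W_sub = global_W[mask]        # sub-model extracted from the global model
    W_sub = local_update(W_sub)   # device trains only its sub-model
    global_W[mask] = W_sub        # server merges the returned rows; in the
                                  # asynchronous setting, each update is applied
                                  # as it arrives rather than in a synchronized round
    print(f"device {device_id}: trained {mask.sum()}/{global_W.shape[0]} hidden units")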

Cite

Text

Dun et al. "Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Dun et al. "Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/dun2022neuripsw-efficient/)

BibTeX

@inproceedings{dun2022neuripsw-efficient,
  title     = {{Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout}},
  author    = {Dun, Chen and Garcia, Mirian Del Carmen Hipolito and Jermaine, Christopher and Dimitriadis, Dimitrios and Kyrillidis, Anastasios},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/dun2022neuripsw-efficient/}
}