Communication-Efficient and Scalable Decentralized Federated Edge Learning

Abstract

Federated Edge Learning (FEL) is a distributed Machine Learning (ML) framework for collaborative training on edge devices. FEL improves data privacy over traditional centralized ML model training by keeping data on the devices and sending only local model updates to a central coordinator for aggregation. However, challenges remain in existing FEL architectures, notably the high communication overhead between edge devices and the coordinator. In this paper, we present a working prototype of a blockchain-empowered and communication-efficient FEL framework, which enhances security and scalability towards large-scale implementation of FEL.
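For context, the aggregation step the abstract refers to is typically federated averaging (FedAvg), in which the coordinator combines client updates weighted by each client's local dataset size. Below is a minimal illustrative sketch of that step; the function name fedavg and the toy values are ours for illustration, not taken from the paper.

import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model updates (standard FedAvg).

    client_weights: list of model parameter arrays, one per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    # Each client's update is weighted by its share of the training data.
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: three clients with different data volumes.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 50, 50]
global_model = fedavg(updates, sizes)  # -> array([2.5, 3.5])

In a decentralized, blockchain-empowered variant such as the one the paper prototypes, this aggregation would no longer run on a single trusted coordinator, which is what motivates the security and scalability claims above.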

Cite

Text

Yapp et al. "Communication-Efficient and Scalable Decentralized Federated Edge Learning." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/720

Markdown

[Yapp et al. "Communication-Efficient and Scalable Decentralized Federated Edge Learning." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/yapp2021ijcai-communication/) doi:10.24963/IJCAI.2021/720

BibTeX

@inproceedings{yapp2021ijcai-communication,
  title     = {{Communication-Efficient and Scalable Decentralized Federated Edge Learning}},
  author    = {Yapp, Austine Zong Han and Koh, Hong Soo Nicholas and Lai, Yan Ting and Kang, Jiawen and Li, Xuandi and Ng, Jer Shyuan and Jiang, Hongchao and Lim, Wei Yang Bryan and Xiong, Zehui and Niyato, Dusit},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {5032--5035},
  doi       = {10.24963/IJCAI.2021/720},
  url       = {https://mlanthology.org/ijcai/2021/yapp2021ijcai-communication/}
}