FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks

Abstract

Federated learning (FL) enables clients to collaboratively train a model while keeping their local training data decentralized. However, high communication costs, data heterogeneity across clients, and a lack of personalization techniques hinder the development of FL. In this paper, we propose FedLTN, a novel approach motivated by the well-known Lottery Ticket Hypothesis to learn sparse and personalized lottery ticket networks (LTNs) for communication-efficient and personalized FL under non-identically and independently distributed (non-IID) data settings. Preserving the batch-norm statistics of local clients, postpruning without rewinding, and aggregating LTNs using server momentum ensure that our approach significantly outperforms existing state-of-the-art solutions. Experiments on the CIFAR-10 and TinyImageNet datasets show the efficacy of our approach in learning personalized models while significantly reducing communication costs.
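Two of the ingredients named above can be illustrated with a minimal sketch: magnitude pruning without rewinding (surviving weights keep their trained values rather than being reset to an earlier checkpoint) and server-side aggregation with momentum. The function names, the momentum coefficient, and the use of plain numpy arrays are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def magnitude_prune(weights, prune_frac):
    """Zero out the smallest-magnitude fraction of weights.
    No rewinding: surviving weights keep their current trained
    values (this is an illustrative sketch, not the paper's code)."""
    flat = np.abs(weights).ravel()
    k = int(prune_frac * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def server_momentum_aggregate(global_w, client_ws, velocity,
                              lr=1.0, beta=0.9):
    """One server round: average the client deltas, then apply a
    momentum-smoothed update to the global weights (hypothetical
    hyperparameters lr and beta)."""
    avg_delta = np.mean([w - global_w for w in client_ws], axis=0)
    velocity = beta * velocity + avg_delta
    return global_w + lr * velocity, velocity
```

For example, pruning `[0.1, -0.5, 2.0, -0.05]` at a 50% rate removes the two smallest-magnitude entries and leaves `[0.0, -0.5, 2.0, 0.0]`; with a zero initial velocity, the first aggregation step reduces to plain averaging of client deltas.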

Cite

Text

Mugunthan et al. "FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19775-8_5

Markdown

[Mugunthan et al. "FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/mugunthan2022eccv-fedltn/) doi:10.1007/978-3-031-19775-8_5

BibTeX

@inproceedings{mugunthan2022eccv-fedltn,
  title     = {{FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks}},
  author    = {Mugunthan, Vaikkunth and Lin, Eric and Gokul, Vignesh and Lau, Christian and Kagal, Lalana and Pieper, Steve},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19775-8_5},
  url       = {https://mlanthology.org/eccv/2022/mugunthan2022eccv-fedltn/}
}