Federated Progressive Sparsification (Purge-Merge-Tune)+

Abstract

We present FedSparsify, a sparsification strategy for federated training based on progressive weight magnitude pruning, which provides several benefits. First, since the network becomes progressively smaller, computation and communication costs during training are reduced. Second, the models are incrementally constrained to a smaller set of parameters, which facilitates alignment and merging of the local models and improves learning performance at high sparsity. Third, the final sparsified model is significantly smaller, which improves inference efficiency. We analyze FedSparsify's convergence and empirically demonstrate that it can learn a subnetwork smaller than a tenth of the size of the original model with the same or better accuracy than existing pruning and no-pruning baselines across several challenging federated learning environments. Our approach yields an average 4-fold inference speedup and a 15-fold reduction in model size across different domains and neural network architectures.
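To make the purge-merge-tune idea concrete, the following is a minimal sketch of progressive federated magnitude pruning, not the paper's implementation: clients tune a shared masked model on local data, the server merges (averages) the masked updates, and the mask is progressively tightened by global weight magnitude. All names (prune_schedule, magnitude_mask, local_update) and the polynomial sparsification schedule and final sparsity level are illustrative assumptions.

import numpy as np

def prune_schedule(round_idx, total_rounds, final_sparsity=0.9):
    # Assumed polynomial schedule: fraction of weights pruned by a given round.
    t = min(round_idx / total_rounds, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def magnitude_mask(weights, sparsity):
    # 'Purge': keep the largest-magnitude weights, zero out the rest.
    if sparsity <= 0.0:
        return np.ones_like(weights, dtype=bool)
    k = int(np.floor(sparsity * weights.size))
    threshold = np.partition(np.abs(weights).ravel(), k)[k]
    return np.abs(weights) >= threshold

def federated_round(global_w, clients, mask, local_update):
    # 'Merge': clients tune the masked model locally; the server averages them.
    client_models = []
    for data in clients:
        local_w = local_update(global_w * mask, data)   # 'tune' on local data
        client_models.append(local_w * mask)            # keep updates inside the mask
    return np.mean(client_models, axis=0)

def progressive_sparsification(global_w, clients, local_update, total_rounds=100):
    mask = np.ones_like(global_w, dtype=bool)
    for r in range(total_rounds):
        global_w = federated_round(global_w, clients, mask, local_update)
        # Progressively tighten the shared mask by global weight magnitude.
        mask = magnitude_mask(global_w, prune_schedule(r + 1, total_rounds))
        global_w = global_w * mask
    return global_w, mask

Because every client trains under the same shrinking mask, local models remain aligned on a common support, which is the intuition behind the improved merging behavior at high sparsity described in the abstract.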

Cite

Text

Stripelis et al. "Federated Progressive Sparsification (Purge-Merge-Tune)+." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Stripelis et al. "Federated Progressive Sparsification (Purge-Merge-Tune)+." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/stripelis2022neuripsw-federated/)

BibTeX

@inproceedings{stripelis2022neuripsw-federated,
  title     = {{Federated Progressive Sparsification (Purge-Merge-Tune)+}},
  author    = {Stripelis, Dimitris and Gupta, Umang and Ver Steeg, Greg and Ambite, Jose Luis},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/stripelis2022neuripsw-federated/}
}