Accelerating Federated Learning with Quick Distributed Mean Estimation
Abstract
Distributed Mean Estimation (DME), in which $n$ clients communicate vectors to a parameter server that estimates their average, is a fundamental building block in communication-efficient federated learning. In this paper, we improve on previous DME techniques that achieve the optimal $O(1/n)$ Normalized Mean Squared Error (NMSE) guarantee by asymptotically improving the complexity for either encoding or decoding (or both). To achieve this, we formalize the problem in a novel way that allows us to use off-the-shelf mathematical solvers to design the quantization. Using various datasets and training tasks, we demonstrate how our method, QUIC-FL, achieves state-of-the-art accuracy with faster encoding and decoding times compared to other DME methods.
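To make the DME setting concrete, here is a minimal simulation sketch. It uses plain per-coordinate unbiased stochastic quantization (an illustrative baseline, not the paper's QUIC-FL scheme): each of $n$ clients quantizes its vector, the server averages the quantized vectors, and because the per-client errors are independent and zero-mean, the NMSE shrinks roughly as $1/n$. All function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, levels=4):
    # Unbiased stochastic rounding to `levels` uniform levels in [x.min(), x.max()].
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (levels - 1)
    t = (x - lo) / scale                 # position measured in level units
    low = np.floor(t)
    p = t - low                          # probability of rounding up keeps E[q] = x
    q = low + (rng.random(x.shape) < p)
    return lo + q * scale

def dme_estimate(vectors):
    # Server-side decoding: average the clients' quantized vectors.
    return np.mean([quantize(v) for v in vectors], axis=0)

def nmse(vectors):
    # Normalized MSE: squared error of the estimate over the mean squared client norm.
    true_mean = np.mean(vectors, axis=0)
    err = dme_estimate(vectors) - true_mean
    return np.sum(err ** 2) / np.mean([np.sum(v ** 2) for v in vectors])

d = 2000
for n in (8, 64):
    clients = [rng.standard_normal(d) for _ in range(n)]
    print(f"n={n:3d}  NMSE={nmse(clients):.4f}")  # error drops roughly as 1/n
```

QUIC-FL improves on such baselines by designing the quantization via mathematical solvers, achieving the optimal $O(1/n)$ NMSE with asymptotically cheaper encoding and/or decoding.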
Cite

Text:
Ben-Basat et al. "Accelerating Federated Learning with Quick Distributed Mean Estimation." International Conference on Machine Learning, 2024.

Markdown:
[Ben-Basat et al. "Accelerating Federated Learning with Quick Distributed Mean Estimation." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/benbasat2024icml-accelerating/)

BibTeX:
@inproceedings{benbasat2024icml-accelerating,
title = {{Accelerating Federated Learning with Quick Distributed Mean Estimation}},
author = {Ben-Basat, Ran and Vargaftik, Shay and Portnoy, Amit and Einziger, Gil and Ben-Itzhak, Yaniv and Mitzenmacher, Michael},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {3410--3442},
volume = {235},
url = {https://mlanthology.org/icml/2024/benbasat2024icml-accelerating/}
}