FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
Abstract
In this work, we propose a communication-efficient parameterization, $\texttt{FedPara}$, for federated learning (FL) to overcome the burden of frequent model uploads and downloads. Our method re-parameterizes the weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our $\texttt{FedPara}$ method is not restricted by low-rank constraints and thereby has a far larger capacity. This property enables it to achieve performance comparable to the model with the original layers while requiring 3 to 10 times lower communication costs, which is not achievable by traditional low-rank methods. The efficiency of our method can be further improved by combining it with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, $\texttt{pFedPara}$, which separates parameters into global and local ones. We show that $\texttt{pFedPara}$ outperforms competing personalized FL methods with more than three times fewer parameters.
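As a rough illustration of the parameterization described in the abstract, the sketch below re-parameterizes a linear layer's weight as the Hadamard product of two rank-$r$ factorizations, so only the small factors need to be communicated. This is a minimal PyTorch sketch under assumed names and initialization (the class, attributes, and example sizes are illustrative, not the authors' released implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankHadamardLinear(nn.Module):
    """Sketch of a FedPara-style linear layer: the full weight matrix is
    re-parameterized as the Hadamard (element-wise) product of two
    low-rank factorizations,
        W = (X1 @ Y1.T) * (X2 @ Y2.T),
    so roughly 2 * r * (in_features + out_features) weight parameters are
    communicated instead of in_features * out_features, while the
    resulting W can reach rank up to r**2.
    Names and initialization are illustrative assumptions, not the paper's code.
    """
    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.x1 = nn.Parameter(torch.empty(out_features, rank))
        self.y1 = nn.Parameter(torch.empty(in_features, rank))
        self.x2 = nn.Parameter(torch.empty(out_features, rank))
        self.y2 = nn.Parameter(torch.empty(in_features, rank))
        self.bias = nn.Parameter(torch.zeros(out_features))
        for p in (self.x1, self.y1, self.x2, self.y2):
            nn.init.kaiming_uniform_(p, a=5 ** 0.5)

    def weight(self) -> torch.Tensor:
        # Hadamard product of two rank-r outer-product factorizations.
        return (self.x1 @ self.y1.t()) * (self.x2 @ self.y2.t())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight(), self.bias)

# Example: for a 512x512 layer with rank 8, a client exchanges
# 4 * 512 * 8 + 512 = 16,896 parameters per round instead of 262,656.
layer = LowRankHadamardLinear(512, 512, rank=8)
out = layer(torch.randn(4, 512))
```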
Cite
Text
Hyeon-Woo et al. "FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning." International Conference on Learning Representations, 2022.

Markdown
[Hyeon-Woo et al. "FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/hyeonwoo2022iclr-fedpara/)

BibTeX
@inproceedings{hyeonwoo2022iclr-fedpara,
title = {{FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning}},
author = {Hyeon-Woo, Nam and Ye-Bin, Moon and Oh, Tae-Hyun},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/hyeonwoo2022iclr-fedpara/}
}