FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning
Abstract
Statistical data heterogeneity is a significant barrier to convergence in federated learning (FL). While prior work has advanced heterogeneous FL through better optimization objectives, these methods fall short when there is *extreme* data heterogeneity among collaborating participants. We hypothesize that convergence under extreme data heterogeneity is primarily hindered by the aggregation of conflicting updates from the participants in the initial collaboration rounds. To overcome this problem, we propose a warmup phase in which each participant learns a personalized mask and updates only a subnetwork of the full model. This *personalized warmup* allows the participants to focus initially on learning specific *subnetworks* tailored to the heterogeneity of their data. After the warmup phase, the participants revert to standard federated optimization, where all parameters are communicated. We empirically demonstrate that the proposed personalized warmup via subnetworks (*FedPeWS*) approach improves accuracy and convergence speed over standard federated optimization methods. The code can be found at https://github.com/tnurbek/fedpews.
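For intuition, below is a minimal PyTorch sketch of the two-phase loop the abstract describes. It is illustrative only, not the authors' implementation (see the linked repository for that): the per-client binary masks are assumed fixed and precomputed here, whereas FedPeWS learns them during warmup, and aggregation is plain uniform FedAvg. All function names (`local_update`, `fedavg`, `run`) are hypothetical.

```python
# Illustrative FedPeWS-style loop: masked local updates during warmup,
# standard federated optimization afterwards. Not the authors' code.
import copy
import torch

def local_update(model, loader, mask, lr, steps):
    """Run a few local SGD steps; if `mask` is given (warmup phase),
    zero out gradients of parameters outside the client's subnetwork."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        if mask is not None:  # warmup: update only the masked subnetwork
            for p, m in zip(model.parameters(), mask):
                p.grad.mul_(m)
        opt.step()
    return model.state_dict()

def fedavg(states):
    """Uniform parameter averaging across client state dicts."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg

def run(global_model, loaders, masks, warmup_rounds, total_rounds, lr=0.01, steps=10):
    for rnd in range(total_rounds):
        in_warmup = rnd < warmup_rounds
        states = []
        for loader, mask in zip(loaders, masks):
            local = copy.deepcopy(global_model)
            states.append(local_update(local, loader, mask if in_warmup else None, lr, steps))
        # All parameters are communicated and averaged in every round,
        # even during warmup, as stated in the abstract.
        global_model.load_state_dict(fedavg(states))
    return global_model
```

After `warmup_rounds`, the mask is dropped and every client trains the full model, so the procedure reverts to standard federated optimization.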
Cite
Text
Tastan et al. "FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning." Conference on Parsimony and Learning, 2025.
Markdown
[Tastan et al. "FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning." Conference on Parsimony and Learning, 2025.](https://mlanthology.org/cpal/2025/tastan2025cpal-fedpews/)
BibTeX
@inproceedings{tastan2025cpal-fedpews,
title = {{FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning}},
author = {Tastan, Nurbek and Horváth, Samuel and Takáč, Martin and Nandakumar, Karthik},
booktitle = {Conference on Parsimony and Learning},
year = {2025},
pages = {462--483},
volume = {280},
url = {https://mlanthology.org/cpal/2025/tastan2025cpal-fedpews/}
}