FedSPU: Personalized Federated Learning for Resource-Constrained Devices with Stochastic Parameter Update
Abstract
Personalized Federated Learning (PFL) is widely employed in the Internet of Things (IoT) to handle high-volume, non-iid client data while ensuring data privacy. However, heterogeneous edge devices owned by clients may impose varying degrees of resource constraints, causing computation and communication bottlenecks for PFL. Federated Dropout has emerged as a popular strategy to address this challenge, wherein only a subset of the global model, i.e., a sub-model, is trained on a client's device, thereby reducing computation and communication overheads. Nevertheless, the dropout-based model-pruning strategy may introduce bias, particularly towards non-iid local data. When biased sub-models absorb highly divergent parameters from other clients, performance degradation becomes inevitable. In response, we propose federated learning with stochastic parameter update (FedSPU). Unlike dropout, which tailors local models into small-size sub-models, FedSPU maintains the full model architecture on each device but randomly freezes a certain percentage of neurons in the local model during training while updating the remaining neurons. This approach ensures that a portion of the local model remains personalized, thereby enhancing the model's robustness against biased parameters from other clients. Experimental results demonstrate that FedSPU outperforms federated dropout by 4.45% on average in terms of accuracy. Furthermore, an introduced early stopping scheme reduces the training time of FedSPU by 25%–71% while maintaining high accuracy.
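The following is a minimal PyTorch sketch of the stochastic-parameter-update idea described in the abstract, not the authors' reference implementation: a random fraction of neurons in each linear layer is frozen for a local training step by zeroing their gradients, so those neurons keep their personalized values. The helper names (make_neuron_masks, local_train_step) and the freeze_ratio parameter are illustrative assumptions.

import torch
import torch.nn as nn

def make_neuron_masks(model: nn.Module, freeze_ratio: float = 0.5):
    """Randomly mark a fraction of output neurons in each linear layer as frozen."""
    masks = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            # True = frozen neuron; drawn independently per output neuron.
            masks[name] = torch.rand(module.out_features) < freeze_ratio
    return masks

def local_train_step(model, masks, batch, loss_fn, optimizer):
    """One local step that updates only the non-frozen neurons."""
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Zero the gradients of frozen neurons so they retain their personalized values.
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and name in masks:
            frozen = masks[name]
            module.weight.grad[frozen, :] = 0.0
            if module.bias is not None and module.bias.grad is not None:
                module.bias.grad[frozen] = 0.0
    optimizer.step()
    return loss.item()

In this sketch, resampling the masks each communication round would give every neuron a chance to absorb global updates over time while a portion of the local model stays personalized in any given round.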
Cite
Text
Niu et al. "FedSPU: Personalized Federated Learning for Resource-Constrained Devices with Stochastic Parameter Update." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I18.34172
Markdown
[Niu et al. "FedSPU: Personalized Federated Learning for Resource-Constrained Devices with Stochastic Parameter Update." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/niu2025aaai-fedspu/) doi:10.1609/AAAI.V39I18.34172
BibTeX
@inproceedings{niu2025aaai-fedspu,
title = {{FedSPU: Personalized Federated Learning for Resource-Constrained Devices with Stochastic Parameter Update}},
author = {Niu, Ziru and Dong, Hai and Qin, A. Kai},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {19721-19729},
doi = {10.1609/AAAI.V39I18.34172},
url = {https://mlanthology.org/aaai/2025/niu2025aaai-fedspu/}
}