FedP3: Federated Personalized and Privacy-Friendly Network Pruning Under Model Heterogeneity
Abstract
Interest in federated learning (FL) has surged in recent research due to its unique ability to train a global model while keeping each client's data private and local. This paper focuses on client-side model heterogeneity, a pervasive challenge that complicates the practical deployment of FL. When clients differ in memory, processing power, and network bandwidth (a phenomenon referred to as system heterogeneity), there is a pressing need to customize a unique model for each client. In response, we present FedP3, an effective and adaptable federated framework for Federated Personalized and Privacy-friendly network Pruning, tailored to model-heterogeneity scenarios. Our proposed methodology can incorporate well-established techniques as specific instances. We offer a theoretical interpretation of FedP3 and of its locally differentially private variant, DP-FedP3, and theoretically validate their efficiency.
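As a rough illustration of the personalization idea described above (not the authors' exact algorithm), a server could derive a differently pruned sub-network for each client according to that client's capacity. The magnitude-based pruning rule, the capacity values, and the names below are hypothetical placeholders:

```python
def magnitude_prune(weights, keep_ratio):
    """Keep the top keep_ratio fraction of weights by magnitude; zero the rest.
    A generic pruning heuristic for illustration, not the FedP3 rule itself.
    Ties at the threshold may keep slightly more than the target count."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Hypothetical per-client capacities modeling system heterogeneity
# (fraction of the global model each device can hold).
client_capacity = {"phone": 0.25, "laptop": 0.5, "workstation": 1.0}

# Toy global model parameters held by the server.
global_weights = [0.9, -0.1, 0.4, -0.7, 0.05, 0.6, -0.3, 0.8]

# Each client receives a sub-network pruned to its own capacity.
personalized = {name: magnitude_prune(global_weights, ratio)
                for name, ratio in client_capacity.items()}
```

In a full FL round, each client would then train its pruned sub-network locally and send updates back for aggregation; the sketch only shows the personalization step.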
Cite
Text
Yi et al. "FedP3: Federated Personalized and Privacy-Friendly Network Pruning Under Model Heterogeneity." International Conference on Learning Representations, 2024.
Markdown
[Yi et al. "FedP3: Federated Personalized and Privacy-Friendly Network Pruning Under Model Heterogeneity." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/yi2024iclr-fedp3/)
BibTeX
@inproceedings{yi2024iclr-fedp3,
title = {{FedP3: Federated Personalized and Privacy-Friendly Network Pruning Under Model Heterogeneity}},
author = {Yi, Kai and Gazagnadou, Nidham and Richtárik, Peter and Lyu, Lingjuan},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/yi2024iclr-fedp3/}
}