Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models
Abstract
Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy. However, FL faces challenges in terms of communication cost and data heterogeneity. One-shot federated learning has emerged as a solution, reducing communication rounds, improving efficiency, and providing better security against eavesdropping attacks. Nevertheless, data heterogeneity remains a significant challenge that impacts performance. This work explores the effectiveness of diffusion models in one-shot FL, demonstrating their applicability in addressing data heterogeneity and improving FL performance. Additionally, we investigate the utility of our diffusion-model approach, FedDiff, compared to other one-shot FL methods under differential privacy (DP). Furthermore, to improve generated sample quality under DP settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method, enhancing the effectiveness of the generated data for global model training. Code is available at https://github.com/mmendiet/FedDiff.
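The abstract does not spell out the FMF procedure, but a minimal sketch of the general idea, filtering diffusion-generated samples by how closely their Fourier magnitude spectra match a reference spectrum, might look as follows. The function names magnitude_spectrum and fmf_filter, the keep_ratio parameter, and the choice of reference statistics are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of Fourier Magnitude Filtering (FMF) for generated samples.
# Assumes images are HxWxC float arrays and a small reference set is available.
import numpy as np

def magnitude_spectrum(img: np.ndarray) -> np.ndarray:
    """Log-magnitude of the centered 2D FFT, averaged over channels."""
    f = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    return np.log1p(np.abs(f)).mean(axis=-1)

def fmf_filter(samples, reference, keep_ratio=0.5):
    """Keep the generated samples whose magnitude spectra are closest
    (in L2 distance) to the mean spectrum of the reference set."""
    ref_spec = np.mean([magnitude_spectrum(x) for x in reference], axis=0)
    dists = np.array([np.linalg.norm(magnitude_spectrum(x) - ref_spec) for x in samples])
    keep = np.argsort(dists)[: max(1, int(keep_ratio * len(samples)))]
    return [samples[i] for i in keep]

# Illustrative usage: filtered = fmf_filter(generated_images, clean_images, keep_ratio=0.5)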
Cite
Text
Mendieta et al. "Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models." Winter Conference on Applications of Computer Vision, 2025.
Markdown
[Mendieta et al. "Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/mendieta2025wacv-navigating/)
BibTeX
@inproceedings{mendieta2025wacv-navigating,
title = {{Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models}},
author = {Mendieta, Matias and Sun, Guangyu and Chen, Chen},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2025},
pages = {2601-2610},
url = {https://mlanthology.org/wacv/2025/mendieta2025wacv-navigating/}
}