GPT-FL: Generative Pre-Trained Model-Assisted Federated Learning
Abstract
In this work, we propose GPT-FL, a generative pre-trained model-assisted federated learning (FL) framework. At its core, GPT-FL leverages generative pre-trained models to generate diversified synthetic data. The synthetic data are used to train a downstream model on the server, which is then fine-tuned with private client data under the standard FL framework. We show that GPT-FL consistently outperforms state-of-the-art FL methods in model test accuracy, communication efficiency, and client sampling efficiency. Through comprehensive ablation analysis, we find that the downstream model trained on synthetic data plays a crucial role in controlling the direction of gradient diversity during FL training, which accelerates convergence and accounts for the notable accuracy boost observed with GPT-FL. Moreover, regardless of whether the target data fall within or outside the domain of the pre-trained generative model, GPT-FL consistently achieves significant performance gains, surpassing models trained solely with FL or solely with synthetic data.
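To make the three-stage workflow described above concrete, here is a minimal, runnable sketch of the pipeline. All names in it (`generate_synthetic_data`, `train_downstream`, `fedavg_round`, the stand-in generator, and the toy scalar "weights") are hypothetical placeholders for illustration only, not the authors' implementation; the federated stage is sketched as a FedAvg-style averaging loop, which is one standard FL choice.

```python
"""Illustrative sketch of the GPT-FL workflow (assumptions, not the paper's code):
Stage 1: query a generative pre-trained model for diversified synthetic data.
Stage 2: train the downstream model centrally on that synthetic data.
Stage 3: fine-tune with private client data under standard FL (FedAvg-style)."""
import random


def generate_synthetic_data(generator, labels, per_label=8):
    # Stage 1: `generator` stands in for a generative pre-trained model;
    # it maps a label to a synthetic sample.
    return [(generator(y), y) for y in labels for _ in range(per_label)]


def train_downstream(data, steps=200, lr=0.05):
    # Stage 2: server-side training on synthetic data only.
    # The "model" is a toy scalar prototype per class.
    weights = {y: 0.0 for _, y in data}
    for _ in range(steps):
        x, y = random.choice(data)
        weights[y] += lr * (x - weights[y])  # toy gradient step
    return weights


def fedavg_round(global_w, client_datasets, lr=0.05):
    # Stage 3: one FL round fine-tuning the synthetic-data-initialized
    # model on private client data, then averaging client updates.
    updates = []
    for data in client_datasets:
        local = dict(global_w)
        for x, y in data:
            local[y] += lr * (x - local[y])  # local fine-tuning step
        updates.append(local)
    return {k: sum(u[k] for u in updates) / len(updates) for k in global_w}


if __name__ == "__main__":
    gen = lambda y: y + random.gauss(0, 0.3)  # stand-in generative model
    synthetic = generate_synthetic_data(gen, labels=[0, 1, 2])
    w = train_downstream(synthetic)  # server-side pre-training
    clients = [[(y + random.gauss(0, 0.1), y) for y in [0, 1, 2]]
               for _ in range(5)]  # private, non-shared client data
    for _ in range(10):
        w = fedavg_round(w, clients)  # federated fine-tuning
    print(w)
```

The point of the sketch is the ordering: the server model is first shaped by synthetic data before any federated round runs, which is the mechanism the abstract credits with controlling gradient diversity during FL training.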
Cite
Text
Zhang et al. "GPT-FL: Generative Pre-Trained Model-Assisted Federated Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2025.
Markdown
[Zhang et al. "GPT-FL: Generative Pre-Trained Model-Assisted Federated Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2025.](https://mlanthology.org/cvprw/2025/zhang2025cvprw-gptfl/)
BibTeX
@inproceedings{zhang2025cvprw-gptfl,
title = {{GPT-FL: Generative Pre-Trained Model-Assisted Federated Learning}},
author = {Zhang, Tuo and Feng, Tiantian and Alam, Samiul and Dimitriadis, Dimitrios and Lee, Sunwoo and Zhang, Mi and Narayanan, Shrikanth S. and Avestimehr, Salman},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2025},
pages = {1761--1770},
url = {https://mlanthology.org/cvprw/2025/zhang2025cvprw-gptfl/}
}