Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
Abstract
Excessive computation and communication demands pose challenges to current FL frameworks, especially when training large-scale models. To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch. To capture more client-specific and class-relevant information from the pre-trained models and jointly improve each client's ability to exploit those off-the-shelf models, we design a Federated Prototype-wise Contrastive Learning (FedPCL) approach which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner. We perform a thorough evaluation of the proposed FedPCL in the lightweight framework, measuring its ability to fuse various pre-trained models on popular FL datasets.
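The core training signal described in the abstract can be illustrated with a short sketch: each client pulls its sample representations toward the prototype of their own class and pushes them away from the prototypes of other classes. Below is a minimal PyTorch sketch of such a prototype-wise contrastive loss, assuming one normalized prototype per class and cosine similarity as the similarity measure; the function name, tensor shapes, and temperature value are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(z, labels, prototypes, temperature=0.07):
    """Prototype-wise contrastive loss (illustrative sketch).

    z:          (B, d) sample representations, e.g. fused outputs of fixed pre-trained backbones
    labels:     (B,)   integer class labels
    prototypes: (C, d) class prototypes, e.g. global prototypes aggregated from clients' class means
    """
    z = F.normalize(z, dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = z @ protos.t() / temperature   # (B, C) similarity of each sample to every prototype
    return F.cross_entropy(logits, labels)  # the positive is the prototype of the sample's own class

# Example usage with random tensors: 8 samples, 128-dim representations, 10 classes
z = torch.randn(8, 128)
labels = torch.randint(0, 10, (8,))
prototypes = torch.randn(10, 128)
loss = prototype_contrastive_loss(z, labels, prototypes)
```

In this reading, sharing only the class prototypes (rather than model weights) is what keeps the per-round communication lightweight, while the contrastive term lets each client shape client-specific representations around the shared prototypes.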
Cite
Text
Tan et al. "Federated Learning from Pre-Trained Models: A Contrastive Learning Approach." ICML 2022 Workshops: Pre-Training, 2022.
Markdown
[Tan et al. "Federated Learning from Pre-Trained Models: A Contrastive Learning Approach." ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/tan2022icmlw-federated/)
BibTeX
@inproceedings{tan2022icmlw-federated,
  title     = {{Federated Learning from Pre-Trained Models: A Contrastive Learning Approach}},
  author    = {Tan, Yue and Long, Guodong and Ma, Jie and Liu, Lu and Zhou, Tianyi and Jiang, Jing},
  booktitle = {ICML 2022 Workshops: Pre-Training},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/tan2022icmlw-federated/}
}