Modular Federated Contrastive Learning with Twin Normalization for Resource-Limited Clients
Abstract
Despite recent progress in federated learning (FL), the challenge of training a global model across clients with heterogeneous, class-imbalanced, and unlabeled data is not fully resolved. Self-supervised learning requires deep and wide networks, and federated training of such networks imposes a heavy communication and computation burden on the client side. We propose Modular Federated Contrastive Learning (MFCL), which changes the training framework from end-to-end to modular: instead of training the entire network in a federated manner, only the first layers are trained federatedly through a server, while the remaining layers are trained at another server without any forward/backward passes between the servers. We also propose Twin Normalization (TN) to tackle data heterogeneity. Results show that ResNet-18 trained with MFCL(TN) on CIFAR-10 achieves $84.1\%$ accuracy under severe data heterogeneity while reducing the communication burden and memory footprint compared to end-to-end training. The code will be released upon paper acceptance.
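The modular split described above can be illustrated with a minimal sketch: a ResNet-18 backbone is cut into a client-side front module (the first layers, trained federatedly) and a server-side back module (the remaining layers, trained at another server), with no gradient flow between the two. The split point, module names (`front`, `back`), and projection dimension below are illustrative assumptions, not the paper's exact configuration, and Twin Normalization and the contrastive objective are omitted.

```python
# Sketch of a modular ResNet-18 split, assuming a torchvision backbone.
# The cut after layer2 and the 128-d head are hypothetical choices.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(num_classes=128)  # placeholder projection dimension

# Client-side module: the first layers, trained federatedly across clients.
front = nn.Sequential(
    backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
    backbone.layer1, backbone.layer2,
)

# Server-side module: the remaining layers, trained at another server.
back = nn.Sequential(
    backbone.layer3, backbone.layer4, backbone.avgpool,
    nn.Flatten(), backbone.fc,
)

x = torch.randn(8, 3, 32, 32)  # a CIFAR-10-sized batch
features = front(x)            # computed by the client-side module

# Detaching mimics the absence of forward/backward passes between servers:
# gradients from the server-side loss never reach the client-side module.
server_out = back(features.detach())
```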
Cite
Text
Motamedi and Kim. "Modular Federated Contrastive Learning with Twin Normalization for Resource-Limited Clients." Transactions on Machine Learning Research, 2024.
Markdown
[Motamedi and Kim. "Modular Federated Contrastive Learning with Twin Normalization for Resource-Limited Clients." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/motamedi2024tmlr-modular/)
BibTeX
@article{motamedi2024tmlr-modular,
title = {{Modular Federated Contrastive Learning with Twin Normalization for Resource-Limited Clients}},
author = {Motamedi, Azadeh and Kim, Il Min},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/motamedi2024tmlr-modular/}
}