MocoSFL: Enabling Cross-Client Collaborative Self-Supervised Learning
Abstract
Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Federated Learning (SFL) and Momentum Contrast (MoCo). In MocoSFL, the large backbone model is split into a small client-side model and a large server-side model, and only the small client-side model runs on the clients' local devices. MocoSFL is equipped with three components: (i) vector concatenation, which enables the use of a small batch size and reduces computation and memory requirements by orders of magnitude; (ii) feature sharing, which helps achieve high accuracy regardless of the quality and volume of local data; (iii) frequent synchronization, which reduces local model divergence and thus improves non-IID performance. For a 1,000-client case with non-IID data (each client has data from 2 random classes of CIFAR-10), MocoSFL can achieve over 84% accuracy with a ResNet-18 model.
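The split-and-concatenate idea from the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the split point, the 128-d projection head, the number of clients, and the per-client batch size are all illustrative assumptions, and names such as client_model and server_model are hypothetical.

```python
import torch
import torch.nn as nn
import torchvision

# Split a ResNet-18 backbone into a small client-side model and a large
# server-side model (the split point chosen here is an illustrative assumption).
backbone = torchvision.models.resnet18(num_classes=128)  # 128-d output as a stand-in projection
layers = list(backbone.children())
client_model = nn.Sequential(*layers[:4])                          # conv1, bn1, relu, maxpool stay on the client
server_model = nn.Sequential(*layers[4:-1], nn.Flatten(), layers[-1])  # residual stages, pooling, head on the server

# Each client computes activations on its own small local batch ...
client_batches = [torch.randn(2, 3, 32, 32) for _ in range(8)]     # e.g. 8 clients, per-client batch size 2
client_activations = [client_model(x) for x in client_batches]

# ... and the server concatenates the received activation vectors into one
# large effective batch before running the shared server-side model, which
# would then feed a MoCo-style contrastive loss (not shown here).
server_input = torch.cat(client_activations, dim=0)                # effective batch size 16
features = nn.functional.normalize(server_model(server_input), dim=1)
```

In this sketch, only the first few layers execute on each device, while the concatenation on the server lets many tiny client batches act as one large contrastive batch.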
Cite
Text
Li et al. "MocoSFL: Enabling Cross-Client Collaborative Self-Supervised Learning." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Li et al. "MocoSFL: Enabling Cross-Client Collaborative Self-Supervised Learning." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/li2022neuripsw-mocosfl/)

BibTeX
@inproceedings{li2022neuripsw-mocosfl,
title = {{MocoSFL: Enabling Cross-Client Collaborative Self-Supervised Learning}},
author = {Li, Jingtao and Lyu, Lingjuan and Iso, Daisuke and Chakrabarti, Chaitali and Spranger, Michael},
booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/li2022neuripsw-mocosfl/}
}