FedX: Unsupervised Federated Learning with Cross Knowledge Distillation
Abstract
This paper presents FedX, an unsupervised federated learning framework. Our model learns unbiased representations from decentralized and heterogeneous local data. It employs two-sided knowledge distillation with contrastive learning as a core component, allowing the federated system to function without requiring clients to share any data features. Furthermore, its adaptable architecture can be used as an add-on module for existing unsupervised algorithms in federated settings. Experiments show that our model significantly improves the performance of five unsupervised algorithms (by 1.58--5.52pp).
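To make the idea of combining contrastive learning with two-sided (local and global) knowledge distillation concrete, the sketch below shows one plausible way such losses could be composed on a client. It is a minimal illustration under assumptions, not the authors' exact FedX objective: the names `local_net`, `global_net`, `view1`, and `view2` are placeholders, and the relation-based distillation over anchor embeddings is a generic, label-free formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style contrastive loss between two augmented views of a batch.
    z1, z2: (N, D) embeddings; matching rows are positive pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                  # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def relational_distillation_loss(z_student, z_teacher, anchors, temperature=0.1):
    """Label-free distillation: match the student's similarity distribution
    over a set of anchor embeddings to the teacher's distribution."""
    z_s = F.normalize(z_student, dim=1)
    z_t = F.normalize(z_teacher, dim=1)
    a = F.normalize(anchors, dim=1)
    log_p_s = F.log_softmax(z_s @ a.t() / temperature, dim=1)
    p_t = F.softmax(z_t @ a.t() / temperature, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")

if __name__ == "__main__":
    # Stand-in encoders and data for illustration only.
    local_net = torch.nn.Linear(32, 16)
    global_net = torch.nn.Linear(32, 16)   # frozen copy received from the server
    view1, view2 = torch.randn(8, 32), torch.randn(8, 32)

    z_local_1, z_local_2 = local_net(view1), local_net(view2)
    with torch.no_grad():
        z_global = global_net(view1)        # global (server-side) teacher embeddings

    # "Two-sided" here means distilling relational knowledge both within the
    # local model (across views) and from the frozen global model, with no
    # raw data or features ever leaving the client.
    loss = (contrastive_loss(z_local_1, z_local_2)
            + relational_distillation_loss(z_local_1, z_global, anchors=z_local_2)
            + relational_distillation_loss(z_local_1, z_local_2, anchors=z_global))
    loss.backward()
    print(float(loss))
```

In this reading, the contrastive term handles unsupervised representation learning on local data, while the two distillation terms regularize the local model toward knowledge shared through the global model, which is one way to mitigate bias from heterogeneous client distributions.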
Cite
Text

Han et al. "FedX: Unsupervised Federated Learning with Cross Knowledge Distillation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20056-4_40

Markdown

[Han et al. "FedX: Unsupervised Federated Learning with Cross Knowledge Distillation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/han2022eccv-fedx/) doi:10.1007/978-3-031-20056-4_40

BibTeX
@inproceedings{han2022eccv-fedx,
title = {{FedX: Unsupervised Federated Learning with Cross Knowledge Distillation}},
author = {Han, Sungwon and Park, Sungwon and Wu, Fangzhao and Kim, Sundong and Wu, Chuhan and Xie, Xing and Cha, Meeyoung},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20056-4_40},
url = {https://mlanthology.org/eccv/2022/han2022eccv-fedx/}
}