Federated Learning with Noisy Labels: Achieving Generalization in the Face of Label Noise
Abstract
Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized private datasets, where the labeling effort is entrusted to the clients. While most existing FL approaches assume that high-quality labels are readily available on users' devices, in reality label noise can naturally occur in FL and follows a non-i.i.d. distribution among clients. Due to this "non-i.i.d.-ness" challenge, existing state-of-the-art centralized approaches exhibit unsatisfactory performance, while previous FL studies rely on data exchange or repeated server-side assistance to improve the model's performance. Here, we propose FedLN, a framework to deal with label noise across different FL training stages, namely FL initialization and server-side model aggregation. Extensive experiments on various publicly available vision and audio datasets demonstrate an improvement of 24% on average compared to state-of-the-art methods for a label noise level of 70%.
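The abstract does not spell out FedLN's server-side mechanics, but the general idea of noise-aware aggregation can be illustrated with a short sketch. Below is a minimal, hypothetical Python variant of FedAvg in which each client's update is weighted by its dataset size and an estimated fraction of clean labels, so clients with noisier labels contribute less to the global model. The function name noise_aware_fedavg and the weighting scheme are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def noise_aware_fedavg(client_layers, client_sizes, est_clean_fracs):
    """Hypothetical noise-aware FedAvg sketch (not the paper's algorithm).

    client_layers: per-client lists of np.ndarray model parameters.
    client_sizes: number of training examples held by each client.
    est_clean_fracs: estimated fraction of correctly labeled examples per client.
    """
    # Scale each client's contribution by data volume and estimated label quality.
    coeffs = np.asarray(client_sizes, dtype=float) * np.asarray(est_clean_fracs, dtype=float)
    coeffs /= coeffs.sum()
    # FedAvg-style weighted average, computed layer by layer.
    return [
        sum(c * layers[i] for c, layers in zip(coeffs, client_layers))
        for i in range(len(client_layers[0]))
    ]

# Toy usage: three equally sized clients sharing a single-layer model.
clients = [[np.full(2, v)] for v in (1.0, 2.0, 3.0)]
global_layers = noise_aware_fedavg(clients, [100, 100, 100], [0.9, 0.5, 0.1])
print(global_layers[0])  # Average is biased toward the cleanest client.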
Cite
Text
Tsouvalas et al. "Federated Learning with Noisy Labels: Achieving Generalization in the Face of Label Noise." NeurIPS 2022 Workshops: INTERPOLATE, 2022.
Markdown
[Tsouvalas et al. "Federated Learning with Noisy Labels: Achieving Generalization in the Face of Label Noise." NeurIPS 2022 Workshops: INTERPOLATE, 2022.](https://mlanthology.org/neuripsw/2022/tsouvalas2022neuripsw-federated/)
BibTeX
@inproceedings{tsouvalas2022neuripsw-federated,
title = {{Federated Learning with Noisy Labels: Achieving Generalization in the Face of Label Noise}},
author = {Tsouvalas, Vasileios and Saeed, Aaqib and Özçelebi, Tanir and Meratnia, Nirvana},
booktitle = {NeurIPS 2022 Workshops: INTERPOLATE},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/tsouvalas2022neuripsw-federated/}
}