FedTrans: Client-Transparent Utility Estimation for Robust Federated Learning
Abstract
Federated Learning (FL) is a privacy-preserving learning paradigm that plays a central role in the Intelligent Internet of Things. Training a global model in FL, however, is vulnerable to noise in the heterogeneous data across clients. In this paper, we introduce **FedTrans**, a novel client-transparent utility estimation method that guides client selection in noisy scenarios and mitigates performance degradation. To estimate client utility, we propose a Bayesian framework that models client utility and its relationship with the weight parameters and performance of local models. We then introduce a variational inference algorithm that effectively infers client utility from only a small amount of auxiliary data. Our evaluation demonstrates that using FedTrans to guide client selection improves accuracy by up to 7.8%, ensuring robustness in noisy scenarios.
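The general idea of utility-guided client selection can be illustrated with a minimal sketch. The `estimate_utility` placeholder, the top-k selection rule, and the toy per-round data below are illustrative assumptions; FedTrans infers utility with a Bayesian model over local weights and local performance via variational inference, which is not reproduced here.

```python
import numpy as np

def estimate_utility(local_weights, local_acc):
    """Placeholder utility score: here, simply the reported local accuracy.
    (Assumption for illustration; not the paper's Bayesian estimator.)"""
    return local_acc

def select_clients(client_updates, k):
    """Keep the k clients with the highest estimated utility."""
    scored = [(cid, estimate_utility(w, acc))
              for cid, (w, acc) in client_updates.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [cid for cid, _ in scored[:k]]

def aggregate(client_updates, selected):
    """Plain averaging (FedAvg-style) over the selected clients' weights."""
    weights = np.stack([client_updates[cid][0] for cid in selected])
    return weights.mean(axis=0)

# Toy round: four clients report (local weights, local accuracy); one is noisy.
updates = {
    0: (np.array([0.9, 1.1]), 0.82),
    1: (np.array([1.0, 1.0]), 0.85),
    2: (np.array([5.0, -3.0]), 0.31),  # noisy client, low estimated utility
    3: (np.array([1.1, 0.9]), 0.80),
}
chosen = select_clients(updates, k=3)
global_update = aggregate(updates, chosen)
print(chosen, global_update)
```

The point of the sketch is only the control flow: score each client, select high-utility clients, and aggregate only their updates so that noisy clients do not degrade the global model.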
Cite
Text
Yang et al. "FedTrans: Client-Transparent Utility Estimation for Robust Federated Learning." International Conference on Learning Representations, 2024.

Markdown
[Yang et al. "FedTrans: Client-Transparent Utility Estimation for Robust Federated Learning." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/yang2024iclr-fedtrans/)

BibTeX
@inproceedings{yang2024iclr-fedtrans,
  title = {{FedTrans: Client-Transparent Utility Estimation for Robust Federated Learning}},
  author = {Yang, Mingkun and Zhu, Ran and Wang, Qing and Yang, Jie},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/yang2024iclr-fedtrans/}
}