When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning

Abstract

Federated Learning has become a widely used framework that allows learning a global model from decentralized local datasets while protecting local data privacy. However, federated learning faces severe optimization difficulty when training samples are not independently and identically distributed (non-i.i.d.). In this paper, we point out that the client sampling practice plays a decisive role in this optimization difficulty. We find that negative client sampling causes the merged data distribution of the currently sampled clients to be heavily inconsistent with that of all available clients, which in turn makes the aggregated gradient unreliable. To address this issue, we propose a novel learning rate adaptation mechanism that adaptively adjusts the server learning rate for the aggregated gradient in each round, according to the consistency between the merged data distribution of the currently sampled clients and that of all available clients. Specifically, through theoretical deduction we derive a meaningful and robust indicator that is positively related to the optimal server learning rate, defined as the rate that minimizes the Euclidean distance between the aggregated gradient of the currently sampled clients and the gradient that would be obtained if all clients participated in the current round. We show that the proposed indicator effectively reflects the merged data distribution of the sampled clients, and we therefore use it to adapt the server learning rate. Extensive experiments on multiple image and text classification tasks validate the effectiveness of our method in various settings. Our code is available at https://github.com/lancopku/FedGLAD.
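To make the idealized target behind this mechanism concrete, the following minimal Python sketch computes the server learning rate that would minimize the Euclidean distance described above if the full-participation gradient were available. It is an illustration under that assumption, not the paper's FedGLAD implementation; the function name and the use of NumPy are hypothetical.

import numpy as np

def ideal_server_lr(g_sampled: np.ndarray, g_full: np.ndarray) -> float:
    """Closed-form solution of min_eta || eta * g_sampled - g_full ||^2,
    i.e. the server learning rate that best aligns the update computed
    from the sampled clients with the full-participation gradient.
    """
    denom = float(np.dot(g_sampled, g_sampled))
    if denom == 0.0:
        # Degenerate round (zero aggregated gradient): no rescaling is meaningful.
        return 1.0
    return float(np.dot(g_sampled, g_full)) / denom

Since the full-participation gradient g_full cannot be computed when only a subset of clients is sampled, the paper instead derives an indicator that is positively related to this optimal rate and uses it to adapt the server learning rate in each round.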

Cite

Text

Yang et al. "When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning." Transactions on Machine Learning Research, 2023.

Markdown

[Yang et al. "When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/yang2023tmlr-trust/)

BibTeX

@article{yang2023tmlr-trust,
  title     = {{When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning}},
  author    = {Yang, Wenkai and Lin, Yankai and Zhao, Guangxiang and Li, Peng and Zhou, Jie and Sun, Xu},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/yang2023tmlr-trust/}
}