Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting

Abstract

Federated learning has been shown to be vulnerable to Byzantine attacks, in which Byzantine attackers send arbitrary gradients to the central server to disrupt the convergence and degrade the performance of the global model. A wealth of robust AGgregation Rules (AGRs) has been proposed to defend against Byzantine attacks. However, Byzantine clients can still circumvent robust AGRs when data is non-Identically and Independently Distributed (non-IID). In this paper, we first reveal the root causes of the performance degradation of current robust AGRs in non-IID settings: the curse of dimensionality and gradient heterogeneity. To address this issue, we propose GAS, a GrAdient Splitting approach that successfully adapts existing robust AGRs to non-IID settings. We also provide a detailed convergence analysis for existing robust AGRs combined with GAS. Experiments on various real-world datasets verify the efficacy of our proposed GAS. The implementation code is provided at https://github.com/YuchenLiu-a/byzantine-gas.
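The abstract describes GAS only at a high level. Below is a minimal, illustrative Python sketch of the gradient-splitting idea as stated above: each client gradient is split into low-dimensional sub-vectors, an existing robust AGR is applied per split, clients are scored by how far their sub-vectors fall from the robust aggregates, and the gradients of the lowest-scoring clients are averaged. The names and details here (gas_aggregate, num_splits, the coordinate-wise-median placeholder AGR, the distance-based score) are assumptions for illustration, not the paper's reference implementation; see the linked repository for the authors' code.

import numpy as np

def coordinate_wise_median(sub_grads):
    # Placeholder robust AGR: coordinate-wise median over clients.
    # Any existing robust AGR could be plugged in here.
    return np.median(sub_grads, axis=0)

def gas_aggregate(grads, num_splits, num_byzantine,
                  agr=coordinate_wise_median, rng=None):
    # Hypothetical sketch of GrAdient Splitting (GAS).
    # grads: (n_clients, d) array of client gradients.
    n, d = grads.shape
    rng = np.random.default_rng(0) if rng is None else rng

    # Split the d coordinates into num_splits low-dimensional groups
    # (random partition assumed here for illustration).
    perm = rng.permutation(d)
    splits = np.array_split(perm, num_splits)

    scores = np.zeros(n)
    for idx in splits:
        sub = grads[:, idx]          # each client's sub-vector for this split
        agg = agr(sub)               # robust aggregate of this split
        # Accumulate each client's distance to the robust aggregate
        # as its identification score (assumed scoring rule).
        scores += np.linalg.norm(sub - agg, axis=1)

    # Keep the n - f clients with the smallest scores and average
    # their original full gradients.
    keep = np.argsort(scores)[: n - num_byzantine]
    return grads[keep].mean(axis=0)

# Example usage: 10 clients, 2 Byzantine, 1000-dimensional gradients.
# g = gas_aggregate(np.random.randn(10, 1000), num_splits=8, num_byzantine=2)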

Cite

Text

Liu et al. "Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting." International Conference on Machine Learning, 2023.

Markdown

[Liu et al. "Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/liu2023icml-byzantinerobust/)

BibTeX

@inproceedings{liu2023icml-byzantinerobust,
  title     = {{Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting}},
  author    = {Liu, Yuchen and Chen, Chen and Lyu, Lingjuan and Wu, Fangzhao and Wu, Sai and Chen, Gang},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {21404--21425},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/liu2023icml-byzantinerobust/}
}