Byzantine-Robust Federated Learning with Optimal Statistical Rates
Abstract
We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates, building on recent progress in high-dimensional robust statistics. In contrast to prior work, our proposed protocols improve the dimension dependence and achieve a near-optimal statistical rate for strongly convex losses. We also provide a statistical lower bound for the problem. In experiments, we benchmark against competing protocols and show the empirical superiority of the proposed protocols.
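The abstract's setting is a server aggregating model updates from clients, some of which may be Byzantine (arbitrarily corrupted). A minimal sketch of one classic robust aggregation rule, the coordinate-wise trimmed mean, illustrates the idea; this is a standard baseline from the robust-statistics literature, not necessarily the paper's exact protocol, and the function and parameter names here are hypothetical:

```python
import numpy as np

def trimmed_mean_aggregate(updates, trim_frac=0.1):
    """Coordinate-wise trimmed mean over client updates.

    updates: (m, d) array, one row per client gradient/update.
    trim_frac: fraction of values removed at each end of every
               coordinate before averaging (hypothetical knob;
               it should exceed the Byzantine fraction).
    """
    m = updates.shape[0]
    k = int(np.floor(trim_frac * m))
    # Sort each coordinate independently across clients.
    sorted_updates = np.sort(updates, axis=0)
    # Drop the k smallest and k largest entries per coordinate.
    kept = sorted_updates[k:m - k] if k > 0 else sorted_updates
    return kept.mean(axis=0)

# Eight honest clients report updates near 1.0; two Byzantine
# clients report 100.0 to try to poison the average.
updates = np.array([[1.0], [1.1], [0.9], [1.0], [1.05],
                    [0.95], [1.0], [1.1], [100.0], [100.0]])
robust = trimmed_mean_aggregate(updates, trim_frac=0.2)
naive = updates.mean(axis=0)
print(robust)  # stays near 1.0
print(naive)   # dragged far above 1.0 by the Byzantine clients
```

The contrast between `robust` and `naive` shows why plain averaging fails under Byzantine clients, while trimming bounds each coordinate's sensitivity to outliers.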
Cite
Text
Zhu et al. "Byzantine-Robust Federated Learning with Optimal Statistical Rates." Artificial Intelligence and Statistics, 2023.
Markdown
[Zhu et al. "Byzantine-Robust Federated Learning with Optimal Statistical Rates." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/zhu2023aistats-byzantinerobust/)
BibTeX
@inproceedings{zhu2023aistats-byzantinerobust,
title = {{Byzantine-Robust Federated Learning with Optimal Statistical Rates}},
author = {Zhu, Banghua and Wang, Lun and Pang, Qi and Wang, Shuai and Jiao, Jiantao and Song, Dawn and Jordan, Michael I.},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {3151--3178},
volume = {206},
url = {https://mlanthology.org/aistats/2023/zhu2023aistats-byzantinerobust/}
}