Mixed Nash for Robust Federated Learning

Abstract

We study robust federated learning (FL) within a game-theoretic framework to alleviate server vulnerabilities to even an informed adversary who can tailor training-time attacks. Specifically, we introduce RobustTailor, a simulation-based framework that prevents the adversary from being omniscient, and we derive its convergence guarantees. RobustTailor improves robustness to training-time attacks significantly while preserving almost the same privacy guarantees as standard robust aggregation schemes in FL. Empirical results under challenging attacks show that RobustTailor performs close to an upper bound computed with perfect knowledge of the honest clients.
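To make the randomized-aggregation idea behind the abstract concrete, here is a minimal illustrative sketch, not the paper's implementation: a server keeps a pool of candidate robust aggregators and samples one per round from a mixed strategy, so an informed adversary cannot tailor its attack to a single fixed rule. The specific aggregators (`coordinate_median`, `trimmed_mean`), the uniform `mixed_strategy`, and the toy attack are assumptions for illustration; the paper derives the strategy via a game, which is omitted here.

```python
# Illustrative sketch only: mixed-strategy selection over robust aggregators.
import numpy as np

def coordinate_median(updates):
    # Coordinate-wise median: a standard robust aggregation rule.
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    # Coordinate-wise trimmed mean: drop the largest/smallest values per coordinate.
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    return stacked[k:len(updates) - k].mean(axis=0)

# Hypothetical pool of candidate aggregators and a mixed strategy over them.
AGGREGATORS = [coordinate_median, trimmed_mean]
mixed_strategy = np.array([0.5, 0.5])  # placeholder; RobustTailor computes this via a simulated game

def aggregate_round(client_updates, rng):
    # Sample an aggregator according to the mixed strategy, then apply it,
    # so the adversary cannot know in advance which rule it must defeat.
    idx = rng.choice(len(AGGREGATORS), p=mixed_strategy)
    return AGGREGATORS[idx](client_updates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
    byzantine = [np.full(10, 100.0) for _ in range(2)]  # crude training-time attack
    print(aggregate_round(honest + byzantine, rng))
```

In this sketch the randomization alone is what denies the adversary an omniscient view of the aggregation rule; how the distribution over aggregators is chosen (here a uniform placeholder) is where the paper's mixed-Nash analysis enters.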

Cite

Text

Xie et al. "Mixed Nash for Robust Federated Learning." Transactions on Machine Learning Research, 2024.

Markdown

[Xie et al. "Mixed Nash for Robust Federated Learning." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/xie2024tmlr-mixed/)

BibTeX

@article{xie2024tmlr-mixed,
  title     = {{Mixed Nash for Robust Federated Learning}},
  author    = {Xie, Wanyun and Pethick, Thomas and Ramezani-Kebrya, Ali and Cevher, Volkan},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/xie2024tmlr-mixed/}
}