Byzantine-Tolerant Methods for Distributed Variational Inequalities
Abstract
Robustness to Byzantine attacks is a necessity for various distributed training scenarios. When training reduces to solving a minimization problem, Byzantine robustness is relatively well understood. However, other problem formulations, such as min-max problems or, more generally, variational inequalities, arise in many modern machine learning and, in particular, distributed learning tasks. These problems differ significantly from standard minimization ones and therefore require separate consideration. Nevertheless, only one prior work [Adibi et al., 2022] addresses this important question in the context of Byzantine robustness. Our work makes a further step in this direction by providing several (provably) Byzantine-robust methods for distributed variational inequalities, thoroughly studying their theoretical convergence, removing the limitations of the previous work, and providing numerical comparisons that support the theoretical findings.
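For context, here is a minimal sketch of the (unconstrained) distributed variational inequality formulation the abstract refers to, written in standard notation that is assumed here rather than quoted from the paper. With regular (non-Byzantine) workers $\mathcal{G}$ holding operators $F_i$, the goal is to

$$\text{find } u^* \in \mathbb{R}^d \ \text{ such that } \ \langle F(u^*),\, u - u^* \rangle \ \ge\ 0 \quad \forall\, u \in \mathbb{R}^d, \qquad F(u) \;=\; \frac{1}{|\mathcal{G}|} \sum_{i \in \mathcal{G}} F_i(u).$$

Min-max problems $\min_x \max_y f(x, y)$ are covered as the special case $F(x, y) = \big(\nabla_x f(x, y),\, -\nabla_y f(x, y)\big)$, while Byzantine workers may report arbitrary vectors in place of $F_i(u)$; tolerating such reports is what the robust methods are designed for.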
Cite

Text

Tupitsa et al. "Byzantine-Tolerant Methods for Distributed Variational Inequalities." Neural Information Processing Systems, 2023.

Markdown

[Tupitsa et al. "Byzantine-Tolerant Methods for Distributed Variational Inequalities." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/tupitsa2023neurips-byzantinetolerant/)

BibTeX
@inproceedings{tupitsa2023neurips-byzantinetolerant,
  title = {{Byzantine-Tolerant Methods for Distributed Variational Inequalities}},
  author = {Tupitsa, Nazarii and Almansoori, Abdulla Jasem and Wu, Yanlin and Takac, Martin and Nandakumar, Karthik and Horváth, Samuel and Gorbunov, Eduard},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/tupitsa2023neurips-byzantinetolerant/}
}