From Local SGD to Local Fixed-Point Methods for Federated Learning

Abstract

Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms. In this work we consider the generic problem of finding a fixed point of an average of operators, or an approximation thereof, in a distributed setting. Our work is motivated by the needs of federated learning. In this context, each local operator models the computations done locally on a mobile device. We investigate two strategies for reaching such a consensus, i.e., a point that all devices agree on as an (approximate) fixed point of the average operator: one based on a fixed number of local steps, and the other based on randomized computations. In both cases, the goal is to limit communication of the locally computed variables, which is often the bottleneck in distributed frameworks. We perform a convergence analysis of both methods and conduct a number of experiments highlighting the benefits of our approach.
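
The first strategy described in the abstract can be pictured with a toy sketch: each device repeatedly applies its own operator for a fixed number of local steps, and only the resulting points are communicated and averaged once per round. The snippet below is a minimal illustration of that idea, not the authors' code; the quadratic objectives, the step size gamma, and the helper names local_operator and local_fixed_point are assumptions chosen for the example, with the gradient step x - gamma * grad f_i(x) as one particular choice of local operator (recovering Local GD). The second strategy mentioned in the abstract would, roughly, replace the fixed inner loop with averaging triggered at random times.

# Minimal sketch (assumptions noted above), Python/NumPy.
import numpy as np

rng = np.random.default_rng(0)
dim, n_devices = 5, 4

# Hypothetical local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2, one per device.
A = [rng.standard_normal((10, dim)) for _ in range(n_devices)]
b = [rng.standard_normal(10) for _ in range(n_devices)]
gamma = 0.01  # step size (assumed; small enough for these quadratics)

def local_operator(i, x):
    # T_i(x): one gradient step on device i, used here as an example local operator.
    grad = A[i].T @ (A[i] @ x - b[i])
    return x - gamma * grad

def local_fixed_point(num_rounds=50, local_steps=10):
    # Each device applies its operator `local_steps` times starting from the shared
    # point, then the locally computed points are averaged (one communication round).
    x = np.zeros(dim)
    for _ in range(num_rounds):
        locals_ = []
        for i in range(n_devices):
            y = x.copy()
            for _ in range(local_steps):
                y = local_operator(i, y)
            locals_.append(y)
        x = np.mean(locals_, axis=0)  # consensus step: only this averaging requires communication
    return x

x_hat = local_fixed_point()
print("approximate fixed point:", x_hat)

Because communication happens only once every local_steps operator applications, the number of communicated vectors is reduced by roughly that factor compared with averaging after every step.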

Cite

Text

Malinovskiy et al. "From Local SGD to Local Fixed-Point Methods for Federated Learning." International Conference on Machine Learning, 2020.

Markdown

[Malinovskiy et al. "From Local SGD to Local Fixed-Point Methods for Federated Learning." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/malinovskiy2020icml-local/)

BibTeX

@inproceedings{malinovskiy2020icml-local,
  title     = {{From Local SGD to Local Fixed-Point Methods for Federated Learning}},
  author    = {Malinovskiy, Grigory and Kovalev, Dmitry and Gasanov, Elnur and Condat, Laurent and Richtarik, Peter},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {6692--6701},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/malinovskiy2020icml-local/}
}