Decentralized Stochastic Optimization with Client Sampling
Abstract
Decentralized optimization is a key setting toward enabling data privacy and on-device learning over networks. Existing research primarily focuses on distributing the objective function across $n$ nodes/clients, lagging behind real-world challenges such as i) node availability (not all $n$ nodes are available at every step of the optimization) and ii) slow information propagation caused by a large number of nodes $n$. In this work, we study Decentralized Stochastic Gradient Descent (D-SGD) with node subsampling, i.e., when only $s~(s \leq n)$ nodes are randomly sampled out of the $n$ nodes at each iteration. We provide theoretical convergence rates for smooth (convex and non-convex) problems with heterogeneous (non-identically distributed) local functions. Our theoretical results capture the effect of node subsampling and of the choice of topology on the sampled nodes through a metric termed \emph{the expected consensus rate}. On a number of common topologies, including ring and torus, we theoretically and empirically demonstrate the effectiveness of this metric.
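For intuition, below is a minimal sketch (not the authors' implementation) of D-SGD with node subsampling: at each iteration, $s$ of the $n$ nodes are sampled, each sampled node takes a stochastic gradient step on its local objective, and the sampled nodes then gossip-average over a ring restricted to them. The quadratic local objectives $f_i(x) = \tfrac{1}{2}\|x - b_i\|^2$, the uniform $1/3$ ring mixing weights, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, d = 16, 8, 5             # total nodes, sampled nodes per step, dimension
T, lr, noise = 200, 0.1, 0.1   # iterations, step size, gradient-noise level

b = rng.normal(size=(n, d))    # heterogeneous local optima (non-iid data proxy)
x = np.zeros((n, d))           # one parameter vector per node


def stochastic_grad(i, xi):
    """Noisy gradient of the local objective f_i(x) = 0.5 * ||x - b_i||^2."""
    return (xi - b[i]) + noise * rng.normal(size=d)


for t in range(T):
    # Sample s of the n clients uniformly at random (without replacement).
    S = np.sort(rng.choice(n, size=s, replace=False))

    # Local stochastic gradient step on the sampled nodes only.
    for i in S:
        x[i] = x[i] - lr * stochastic_grad(i, x[i])

    # Gossip averaging over a ring built on the *sampled* nodes:
    # each sampled node mixes equally with itself and its two ring neighbors.
    x_new = x.copy()
    for k, i in enumerate(S):
        left, right = S[k - 1], S[(k + 1) % s]
        x_new[i] = (x[i] + x[left] + x[right]) / 3.0
    x = x_new

# How far the nodes are from consensus, and how close the average iterate
# is to the global optimum (the mean of the b_i for this quadratic example).
print("consensus distance :", np.mean(np.linalg.norm(x - x.mean(axis=0), axis=1)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - b.mean(axis=0)))
```

Varying `s` and the topology imposed on the sampled nodes in this sketch gives a rough empirical feel for how subsampling slows consensus, which is what the paper's expected consensus rate quantifies.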
Cite
Text
Liu et al. "Decentralized Stochastic Optimization with Client Sampling." NeurIPS 2022 Workshops: OPT, 2022.

Markdown

[Liu et al. "Decentralized Stochastic Optimization with Client Sampling." NeurIPS 2022 Workshops: OPT, 2022.](https://mlanthology.org/neuripsw/2022/liu2022neuripsw-decentralized/)

BibTeX
@inproceedings{liu2022neuripsw-decentralized,
title = {{Decentralized Stochastic Optimization with Client Sampling}},
author = {Liu, Ziwei and Koloskova, Anastasia and Jaggi, Martin and Lin, Tao},
booktitle = {NeurIPS 2022 Workshops: OPT},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/liu2022neuripsw-decentralized/}
}