Zero-Order One-Point Gradient Estimate in Consensus-Based Distributed Stochastic Optimization

Abstract

In this work, we consider a distributed multi-agent stochastic optimization problem, where each agent holds a local objective function that is smooth and strongly convex and that is subject to a stochastic process. The goal is for all agents to collaborate to find a common solution that optimizes the sum of these local functions. Under the practical assumption that agents can only obtain noisy numerical function queries at precisely one point at a time, we consider an extension of a standard consensus-based distributed stochastic gradient (DSG) method to the bandit setting, where we do not have access to the gradient, and we introduce a zero-order (ZO) one-point estimate (1P-DSG). We analyze the convergence of this technique using stochastic approximation tools, and we prove that it \textit{converges almost surely to the optimum} despite the bias of our gradient estimate. We then study the convergence rate of our method. With constant step sizes, our method competes with its first-order (FO) counterparts by achieving a linear rate $O(\varrho^k)$ as a function of the number of iterations $k$. To the best of our knowledge, this is the first work to prove this rate in the noisy estimation setting or with one-point estimators. With vanishing step sizes, we establish a rate of $O(\frac{1}{\sqrt{k}})$ after a sufficient number of iterations $k > K_0$. This rate matches the lower bound for centralized techniques using one-point estimators. We then provide a regret bound of $O(\sqrt{k})$ with vanishing step sizes. Finally, we illustrate the usefulness of the proposed technique with numerical experiments.
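To make the setting concrete, below is a minimal Python sketch of one consensus-plus-descent iteration of the kind the abstract describes, using the standard one-point zero-order estimator $g = (d/\gamma)\, f(x + \gamma u)\, u$ with $u$ uniform on the unit sphere, a doubly stochastic mixing matrix $W$, step size $\alpha$, and smoothing radius $\gamma$. The names (`one_point_zo_estimate`, `dsg_1p_step`) and the exact estimator form are illustrative assumptions, not the paper's code or necessarily its precise algorithm.

import numpy as np

def one_point_zo_estimate(f_noisy, x, gamma, rng):
    # One-point zero-order estimate: a single noisy function query at a
    # randomly perturbed point, scaled by the perturbation direction:
    #   g = (d / gamma) * f(x + gamma * u) * u,  u uniform on the unit sphere.
    # (Standard one-point form; the paper's exact estimator may differ.)
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / gamma) * f_noisy(x + gamma * u) * u

def dsg_1p_step(X, W, f_noisy_list, alpha, gamma, rng):
    # One consensus-based step: each agent averages its iterate with its
    # neighbors' iterates (via the doubly stochastic mixing matrix W),
    # then descends along its local one-point zero-order estimate.
    n, _ = X.shape
    G = np.stack([one_point_zo_estimate(f_noisy_list[i], X[i], gamma, rng)
                  for i in range(n)])
    return W @ X - alpha * G

# Toy usage: n agents with noisy quadratic (smooth, strongly convex) objectives.
rng = np.random.default_rng(0)
n, d = 4, 3
targets = rng.standard_normal((n, d))
f_noisy_list = [lambda x, t=t: 0.5 * np.sum((x - t) ** 2)
                + 0.01 * rng.standard_normal() for t in targets]
W = np.full((n, n), 1.0 / n)          # fully connected uniform averaging
X = rng.standard_normal((n, d))
for _ in range(2000):
    X = dsg_1p_step(X, W, f_noisy_list, alpha=0.02, gamma=0.1, rng=rng)
# All rows of X should end up close to the mean of the targets,
# the minimizer of the sum of the local objectives.

Note the design point the abstract emphasizes: each agent issues exactly one noisy function query per iteration, so the estimate is biased and high-variance, and the convergence guarantees rest on the stochastic approximation analysis rather than on an unbiased gradient oracle.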

Cite

Text

Mhanna and Assaad. "Zero-Order One-Point Gradient Estimate in Consensus-Based Distributed Stochastic Optimization." Transactions on Machine Learning Research, 2024.

Markdown

[Mhanna and Assaad. "Zero-Order One-Point Gradient Estimate in Consensus-Based Distributed Stochastic Optimization." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/mhanna2024tmlr-zeroorder/)

BibTeX

@article{mhanna2024tmlr-zeroorder,
  title     = {{Zero-Order One-Point Gradient Estimate in Consensus-Based Distributed Stochastic Optimization}},
  author    = {Mhanna, Elissa and Assaad, Mohamad},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/mhanna2024tmlr-zeroorder/}
}