Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function
Abstract
Zero-order (ZO) optimization is a powerful tool for dealing with realistic constraints. On the other hand, the gradient-tracking (GT) technique has proved to be an efficient method for distributed optimization aiming to achieve consensus. However, GT is a first-order (FO) method that requires knowledge of the gradient, which is not always available in practice. In this work, we introduce a zero-order distributed optimization method that combines the gradient-tracking technique with a one-point estimate of the gradient. We prove that this new technique converges in the non-convex setting using only a single noisy function query at a time. We then establish a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ after $K$ iterations, which is competitive with the $O(\frac{1}{\sqrt[4]{K}})$ rate of its centralized counterparts. Finally, a numerical example validates our theoretical results.
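To make the two ingredients concrete, below is a minimal Python sketch of a one-point (single-query) gradient estimate combined with a gradient-tracking consensus update on a toy quadratic problem. This is not the paper's exact algorithm: the objective `noisy_f`, the ring topology `W`, the step-size schedule, and the projection step are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm): distributed one-point
# zeroth-order optimization with gradient tracking on a toy quadratic.
# All names (noisy_f, one_point_grad, W, targets) are illustrative.

rng = np.random.default_rng(0)
n_agents, dim = 4, 5
targets = 0.5 * rng.normal(size=(n_agents, dim))  # local minimizers

def noisy_f(i, x):
    """Noisy zeroth-order oracle for agent i's local objective."""
    return np.sum((x - targets[i]) ** 2) + 0.01 * rng.normal()

def one_point_grad(i, x, delta):
    """Single-query gradient estimate: (d / delta) * f(x + delta * u) * u,
    with u uniform on the unit sphere. Unbiased for a smoothed surrogate
    of f, but high-variance, hence the decaying step size below."""
    u = rng.normal(size=dim)
    u /= np.linalg.norm(u)
    return (dim / delta) * noisy_f(i, x + delta * u) * u

def project(x, radius=3.0):
    """Projection onto a ball; keeps the noisy one-point iterates bounded
    (a standard device in one-point ZO analyses, assumed here)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

delta = 0.5
x = np.zeros((n_agents, dim))
g = np.array([one_point_grad(i, x[i], delta) for i in range(n_agents)])
y = g.copy()  # trackers initialized with the first gradient estimates

for k in range(5000):
    alpha = 0.02 / (k + 1) ** 0.75          # decaying step size
    x = W @ x - alpha * y                   # consensus + descent step
    x = np.array([project(xi) for xi in x])
    g_new = np.array([one_point_grad(i, x[i], delta)
                      for i in range(n_agents)])
    y = W @ y + g_new - g                   # gradient-tracking update
    g = g_new

print("consensus error :", np.linalg.norm(x - x.mean(axis=0)))
print("dist. to optimum:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

Each agent issues exactly one noisy function query per iteration; the high variance of this estimator is the price of the single query, and it is why a decaying step size (and, in the paper's analysis, a carefully tuned schedule) is needed to obtain the stated convergence rate.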
Cite
Text
Mhanna and Assaad. "Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function." International Conference on Machine Learning, 2023.
Markdown
[Mhanna and Assaad. "Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/mhanna2023icml-single/)
BibTeX
@inproceedings{mhanna2023icml-single,
title = {{Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function}},
author = {Mhanna, Elissa and Assaad, Mohamad},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {24701--24719},
volume = {202},
url = {https://mlanthology.org/icml/2023/mhanna2023icml-single/}
}