Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications

Abstract

In this paper, we design and analyze a new zeroth-order online algorithm, namely, the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys the dual advantages of gradient-free operation and of employing ADMM to accommodate complex structured regularizers. Compared to first-order gradient-based online algorithms, we show that ZOO-ADMM requires $\sqrt{m}$ times more iterations, leading to a convergence rate of $O(\sqrt{m}/\sqrt{T})$, where $m$ is the number of optimization variables and $T$ is the number of iterations. To accelerate ZOO-ADMM, we propose two minibatch strategies: gradient sample averaging and observation averaging, resulting in an improved convergence rate of $O(\sqrt{1+q^{-1}m}/\sqrt{T})$, where $q$ is the minibatch size. In addition to the convergence analysis, we demonstrate applications of ZOO-ADMM in signal processing, statistics, and machine learning.
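The gradient-free operation and the gradient-sample-averaging minibatch strategy described above can be illustrated with a generic two-point zeroth-order gradient estimator: perturb the variable along random directions, difference the function values, and average over a minibatch of `q` directions to reduce variance. This is a minimal sketch of the general technique, not the paper's exact ZOO-ADMM update; the function names, smoothing parameter `mu`, and quadratic test function are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, q=10, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages (f(x + mu*u) - f(x)) / mu * u over q random Gaussian
    directions u (gradient sample averaging); only function values
    of f are used, never its gradient.
    """
    rng = np.random.default_rng(rng)
    m = x.size
    g = np.zeros(m)
    for _ in range(q):
        u = rng.standard_normal(m)       # random search direction
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / q

# Illustrative check: for f(x) = ||x||^2 / 2 the true gradient is x,
# so the averaged estimate should concentrate around x as q grows.
x = np.array([1.0, -2.0, 3.0])
f = lambda v: 0.5 * v @ v
g = zo_gradient(f, x, q=2000, rng=0)
```

Larger minibatch sizes `q` shrink the estimator's variance, which mirrors the improved $O(\sqrt{1+q^{-1}m}/\sqrt{T})$ rate stated in the abstract.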

Cite

Text

Liu et al. "Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications." International Conference on Artificial Intelligence and Statistics, 2018.

Markdown

[Liu et al. "Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/liu2018aistats-zeroth/)

BibTeX

@inproceedings{liu2018aistats-zeroth,
  title     = {{Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications}},
  author    = {Liu, Sijia and Chen, Jie and Chen, Pin-Yu and Hero, III, Alfred O.},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2018},
  pages     = {288-297},
  url       = {https://mlanthology.org/aistats/2018/liu2018aistats-zeroth/}
}