Fast Stochastic Alternating Direction Method of Multipliers
Abstract
We propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having low per-iteration complexity like existing stochastic ADMM algorithms, it improves the convergence rate on convex problems from O(1/√T) to O(1/T), where T is the number of iterations. This matches the convergence rate of the batch ADMM algorithm, but without the need to visit all the samples in each iteration. Experiments on the graph-guided fused lasso demonstrate that the new algorithm is significantly faster than state-of-the-art stochastic and batch ADMM algorithms.
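The core idea in the abstract lends itself to a short sketch: keep a table of per-sample gradients, refresh one entry per iteration, and use the running average of the table as an incremental approximation of the full gradient inside a linearized ADMM step. Below is a minimal NumPy sketch of this pattern on a toy fused-lasso-style problem; the fully linearized x-update, the fixed step size `eta`, and the synthetic data are illustrative assumptions, not the paper's exact algorithm or step-size schedule.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_admm(S, b, A, lam=0.1, rho=1.0, eta=0.05, T=20000, seed=0):
    # Sketch of stochastic linearized ADMM with an incrementally averaged
    # gradient for  min_x (1/2n) * sum_i (s_i^T x - b_i)^2 + lam * ||A x||_1,
    # split as  min f(x) + lam*||y||_1  s.t.  A x = y.
    rng = np.random.default_rng(seed)
    n, d = S.shape
    m = A.shape[0]
    x, y, u = np.zeros(d), np.zeros(m), np.zeros(m)
    grad_tab = np.zeros((n, d))   # last gradient computed for each sample
    grad_sum = np.zeros(d)        # running sum of the stored gradients
    for _ in range(T):
        i = rng.integers(n)
        # Refresh sample i's gradient; averaging the stored gradients
        # incrementally approximates the full gradient of f.
        g_new = (S[i] @ x - b[i]) * S[i]
        grad_sum += g_new - grad_tab[i]
        grad_tab[i] = g_new
        g_avg = grad_sum / n
        # x-update: one gradient step on the linearized augmented
        # Lagrangian (an assumed simplification of the paper's update).
        x = x - eta * (g_avg + rho * A.T @ (A @ x - y + u))
        # y-update: exact prox of lam * ||.||_1 (soft-thresholding).
        y = soft_threshold(A @ x + u, lam / rho)
        # Scaled dual update.
        u += A @ x - y
    return x

# Toy demo: a chain difference matrix stands in for the graph-guided
# fused-lasso penalty used in the paper's experiments.
rng = np.random.default_rng(1)
n, d = 200, 20
S = rng.standard_normal((n, d))
x_true = np.repeat([1.0, 0.0, -1.0, 0.5], 5)
b = S @ x_true + 0.1 * rng.standard_normal(n)
A = np.eye(d, k=1)[:-1] - np.eye(d)[:-1]
x_hat = stochastic_admm(S, b, A)
```

Because each iteration touches only one sample's gradient while the averaged table stands in for the full gradient, the per-iteration cost stays low while the gradient estimate improves over time, which is what drives the O(1/T) rate claimed in the abstract.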
Cite
Text
Zhong and Kwok. "Fast Stochastic Alternating Direction Method of Multipliers." International Conference on Machine Learning, 2014.

Markdown
[Zhong and Kwok. "Fast Stochastic Alternating Direction Method of Multipliers." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/zhong2014icml-fast/)

BibTeX
@inproceedings{zhong2014icml-fast,
title = {{Fast Stochastic Alternating Direction Method of Multipliers}},
author = {Zhong, Wenliang and Kwok, James},
booktitle = {International Conference on Machine Learning},
year = {2014},
pages = {46--54},
volume = {32},
url = {https://mlanthology.org/icml/2014/zhong2014icml-fast/}
}