Towards an Optimal Stochastic Alternating Direction Method of Multipliers

Abstract

We study regularized stochastic convex optimization subject to linear equality constraints. This class of problems was also recently studied by Ouyang et al. (2013) and Suzuki (2013); both introduced similar stochastic alternating direction method of multipliers (SADMM) algorithms. However, the analyses in both papers led to suboptimal convergence rates. This paper presents two new SADMM methods: (i) the first attains the minimax optimal rate of O(1/k) for nonsmooth strongly convex stochastic problems; (ii) the second progresses towards an optimal rate by exhibiting an O(1/k^2) rate for the smooth part. We present several experiments with our new methods; the results indicate improved performance over competing ADMM methods.
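To make the setting concrete, the sketch below shows a generic linearized stochastic ADMM iteration on a lasso-style instance, min_x E_i[0.5 (a_i^T x - b_i)^2] + lam ||z||_1 subject to x - z = 0. This is an illustrative baseline of the SADMM template, not the paper's specific algorithm; the O(1/sqrt(k)) step-size schedule, penalty parameter rho, and the closed-form x-update are assumptions made for this example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the z-subproblem for an l1 regularizer)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sadmm_lasso(A, b, lam=0.1, rho=1.0, steps=2000, seed=0):
    """Illustrative linearized stochastic ADMM sketch (not the paper's method) for
    min_x E_i[0.5 (a_i^T x - b_i)^2] + lam * ||z||_1   s.t.  x - z = 0.
    One random row of A per iteration supplies the stochastic gradient."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)  # u is the scaled dual variable
    for k in range(1, steps + 1):
        i = rng.integers(n)
        g = (A[i] @ x - b[i]) * A[i]   # stochastic gradient of the smooth loss
        eta = 1.0 / np.sqrt(k)         # assumed decaying step size (O(1/sqrt(k)))
        # x-step: linearize f around x_k; the resulting quadratic has a closed form
        x = (x / eta + rho * (z - u) - g) / (1.0 / eta + rho)
        # z-step: exact proximal (soft-thresholding) step for the l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the scaled multiplier for the constraint x - z = 0
        u = u + x - z
    return z
```

For strongly convex losses the paper's first method attains the optimal O(1/k) rate, which a schedule like the generic 1/sqrt(k) above does not achieve; the sketch only illustrates the split between the stochastic smooth part and the deterministic proximal part.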

Cite

Text

Azadi and Sra. "Towards an Optimal Stochastic Alternating Direction Method of Multipliers." International Conference on Machine Learning, 2014.

Markdown

[Azadi and Sra. "Towards an Optimal Stochastic Alternating Direction Method of Multipliers." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/azadi2014icml-optimal/)

BibTeX

@inproceedings{azadi2014icml-optimal,
  title     = {{Towards an Optimal Stochastic Alternating Direction Method of Multipliers}},
  author    = {Azadi, Samaneh and Sra, Suvrit},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {620--628},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/azadi2014icml-optimal/}
}