A General Analysis of the Convergence of ADMM
Abstract
We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates the need for assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.
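To make the setting concrete, the following is a minimal NumPy sketch of over-relaxed ADMM applied to the lasso, an instance where one objective term (the quadratic) is strongly convex whenever `A` has full column rank. The penalty parameter `rho` and over-relaxation parameter `alpha` are the tunable algorithm parameters whose selection the paper's derived bound is meant to guide; the function names and the particular parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding: the proximal operator of kappa * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, alpha=1.0, num_iters=200):
    """Over-relaxed ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.

    rho is the ADMM penalty (step-size) parameter and alpha the over-relaxation
    parameter, typically taken in (0, 2).
    """
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(num_iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        x_hat = alpha * x + (1.0 - alpha) * z  # over-relaxation step
        z = soft_threshold(x_hat + u, lam / rho)
        u = u + x_hat - z
    return z

# Usage: a small random instance (A has full column rank almost surely,
# so the quadratic term is strongly convex).
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
b = rng.standard_normal(60)
z_opt = admm_lasso(A, b, lam=0.1, rho=2.0, alpha=1.5)
```

In this sketch, different choices of `rho` and `alpha` can change the observed convergence rate substantially, which is the practical motivation for minimizing the paper's rate bound when selecting parameters.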
Cite
Text
Nishihara et al. "A General Analysis of the Convergence of ADMM." International Conference on Machine Learning, 2015.

Markdown
[Nishihara et al. "A General Analysis of the Convergence of ADMM." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/nishihara2015icml-general/)

BibTeX
@inproceedings{nishihara2015icml-general,
  title = {{A General Analysis of the Convergence of ADMM}},
  author = {Nishihara, Robert and Lessard, Laurent and Recht, Ben and Packard, Andrew and Jordan, Michael},
  booktitle = {International Conference on Machine Learning},
  year = {2015},
  pages = {343--352},
  volume = {37},
  url = {https://mlanthology.org/icml/2015/nishihara2015icml-general/}
}