Adaptive Consensus ADMM for Distributed Optimization
Abstract
The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters. We study distributed ADMM methods that boost performance by using different fine-tuned algorithm parameters on each worker node. We present an O(1/k) convergence rate for adaptive ADMM methods with node-specific parameters, and propose adaptive consensus ADMM (ACADMM), which automatically tunes parameters without user oversight.
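For orientation, the following is a minimal sketch (not the authors' released code) of consensus ADMM with a node-specific penalty rho_i on each worker, applied to a toy distributed ridge-regression problem. The local objectives, the penalty values, and the fixed iteration count are illustrative assumptions, and the spectral rule ACADMM uses to adapt rho_i automatically is omitted; fixed rho_i are used instead.

# Sketch: consensus ADMM with per-node penalty parameters rho_i,
# solving   minimize  sum_i ||A_i x - b_i||^2 + (lam/2)||x||^2
# by splitting local variables x_i against a global consensus variable z.
# Illustrative assumptions: ridge-regression local losses, fixed rho_i,
# fixed number of iterations (no adaptive tuning or stopping criterion).
import numpy as np

def consensus_admm(A_list, b_list, lam=0.1, rho=None, iters=100):
    n = A_list[0].shape[1]
    N = len(A_list)
    rho = np.full(N, 1.0) if rho is None else np.asarray(rho, dtype=float)
    x = [np.zeros(n) for _ in range(N)]   # local primal variables
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # global consensus variable
    for _ in range(iters):
        # Local x-updates: each node solves a small regularized least-squares problem.
        for i in range(N):
            H = 2.0 * A_list[i].T @ A_list[i] + rho[i] * np.eye(n)
            g = 2.0 * A_list[i].T @ b_list[i] + rho[i] * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        # Global z-update: rho-weighted average; the (lam/2)||z||^2 term is handled here.
        z = sum(rho[i] * (x[i] + u[i]) for i in range(N)) / (lam + rho.sum())
        # Scaled dual updates enforce the consensus constraints x_i = z.
        for i in range(N):
            u[i] = u[i] + x[i] - z
    return z

With uniform rho the scheme reduces to standard consensus ADMM; ACADMM's contribution is choosing a different rho_i on each worker and updating it automatically during the run.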
Cite
Text
Xu et al. "Adaptive Consensus ADMM for Distributed Optimization." International Conference on Machine Learning, 2017.
Markdown
[Xu et al. "Adaptive Consensus ADMM for Distributed Optimization." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/xu2017icml-adaptive/)
BibTeX
@inproceedings{xu2017icml-adaptive,
title = {{Adaptive Consensus ADMM for Distributed Optimization}},
author = {Xu, Zheng and Taylor, Gavin and Li, Hao and Figueiredo, Mário A. T. and Yuan, Xiaoming and Goldstein, Tom},
booktitle = {International Conference on Machine Learning},
year = {2017},
pages = {3841--3850},
volume = {70},
url = {https://mlanthology.org/icml/2017/xu2017icml-adaptive/}
}