Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms

Abstract

We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle-point problems and solved by primal-dual first-order algorithms. However, primal-dual algorithms often require explicit strongly convex regularization to obtain fast linear convergence, and the required dual proximal mapping may not admit a closed-form or efficient solution. In this paper, we develop both batch and randomized primal-dual algorithms that adaptively exploit strong convexity from the data and can achieve linear convergence even without regularization. We also present dual-free variants of the adaptive primal-dual algorithms that do not need the dual proximal mapping, which are especially suitable for logistic regression.
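
For context, a minimal sketch of the saddle-point reformulation the abstract refers to, in standard (illustrative, not necessarily the paper's own) notation, assuming losses \(\phi_i\) with convex conjugates \(\phi_i^*\), a regularizer \(g\), and data vectors \(a_i\):

\[
\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \phi_i(a_i^\top x) + g(x)
\;=\;
\min_{x \in \mathbb{R}^d} \, \max_{y \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i \, a_i^\top x - \phi_i^*(y_i) \bigr) + g(x).
\]

Under this sketch, the dual proximal mapping mentioned in the abstract is the proximal operator of \(\phi_i^*\), which for losses such as the logistic loss has no closed form; this motivates the dual-free variants.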

Cite

Text

Wang and Xiao. "Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms." International Conference on Machine Learning, 2017.

Markdown

[Wang and Xiao. "Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/wang2017icml-exploiting/)

BibTeX

@inproceedings{wang2017icml-exploiting,
  title     = {{Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms}},
  author    = {Wang, Jialei and Xiao, Lin},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {3694--3702},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/wang2017icml-exploiting/}
}