Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with Ordered L1-Norm
Abstract
In this paper we propose a primal-dual proximal extragradient algorithm to solve the generalized Dantzig selector (GDS) estimation problem, based on a new convex-concave saddle-point (SP) reformulation. Our new formulation makes it possible to adopt recent developments in saddle-point optimization and achieve the optimal $O(1/k)$ rate of convergence. Compared to optimal non-SP algorithms, ours does not require specifying sensitive parameters that affect algorithm performance or solution quality. We also provide a new analysis showing the possibility of local acceleration to an $O(1/k^2)$ rate in special cases, even without strong convexity or strong smoothness. As an application, we propose a GDS estimator equipped with the ordered $\ell_1$-norm and show its false discovery rate control properties in variable selection. We compare the performance of our algorithm with alternatives, including linearized ADMM, Nesterov's smoothing, Nemirovski's mirror-prox, and accelerated hybrid proximal extragradient techniques.
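For orientation, the LaTeX sketch below writes down the standard generalized Dantzig selector with a generic norm $R$ and its dual norm $R^*$, the ordered $\ell_1$-norm (the SLOPE norm), and one common way to dualize the constraint into a convex-concave saddle-point problem. The notation (e.g., the constraint level $\lambda_0$ and dual variable $u$) and the particular dualization are illustrative assumptions, not necessarily the exact reformulation used in the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative sketch only; notation is assumed rather than taken from the paper.
% Generalized Dantzig selector: minimize a structure-inducing norm R subject to
% a bound on the dual norm R^* of the residual correlations X^T(y - X beta).
\begin{align*}
  \hat\beta_{\mathrm{GDS}}
    &\in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} R(\beta)
    \quad \text{subject to} \quad R^{*}\!\bigl(X^{\top}(y - X\beta)\bigr) \le \lambda_0 . \\
\intertext{Ordered $\ell_1$-norm (SLOPE norm) with weights
$\lambda_1 \ge \cdots \ge \lambda_p \ge 0$ and sorted magnitudes
$|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$:}
  J_{\lambda}(\beta) &= \sum_{i=1}^{p} \lambda_i\, |\beta|_{(i)} . \\
\intertext{One standard convex-concave saddle-point form, obtained by writing the
indicator of the constraint set $\{z : R^{*}(z) \le \lambda_0\}$ through its conjugate
(dual variable $u$):}
  \min_{\beta \in \mathbb{R}^p} \; \max_{u \in \mathbb{R}^p}
    \; & R(\beta) + \bigl\langle u,\, X^{\top}(y - X\beta) \bigr\rangle - \lambda_0\, R(u) .
\end{align*}
\end{document}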
Cite
Text
Lee et al. "Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with Ordered L1-Norm." International Conference on Artificial Intelligence and Statistics, 2016. https://mlanthology.org/aistats/2016/lee2016aistats-fast/
BibTeX
@inproceedings{lee2016aistats-fast,
  title     = {{Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with Ordered L1-Norm}},
  author    = {Lee, Sangkyun and Brzyski, Damian and Bogdan, Malgorzata},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2016},
  pages     = {780--789},
  url       = {https://mlanthology.org/aistats/2016/lee2016aistats-fast/}
}