Efficient Designs of SLOPE Penalty Sequences in Finite Dimension
Abstract
In linear regression, SLOPE is a relatively new convex optimization method that generalizes the Lasso via the sorted $\ell_1$ penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence $\boldsymbol{\lambda}$ as input, rather than the scalar penalty of the Lasso, which makes designing the penalty computationally expensive. In this paper, we propose two efficient algorithms to design the possibly high-dimensional SLOPE penalty so as to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) algorithm under the Approximate Message Passing regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) algorithm to design a sub-class of SLOPE, referred to as $k$-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs via extensive experiments on synthetic data and real-world datasets.
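As context for the penalty design problem, here is a minimal NumPy sketch of the sorted $\ell_1$ penalty that defines SLOPE, $J_{\boldsymbol{\lambda}}(\beta) = \sum_i \lambda_i |\beta|_{(i)}$ with $\lambda_1 \ge \dots \ge \lambda_p \ge 0$ and $|\beta|_{(1)} \ge \dots \ge |\beta|_{(p)}$, together with a helper that builds a $k$-level sequence in which $\boldsymbol{\lambda}$ takes only $k$ distinct values. The helper's name and signature are illustrative, not from the paper.

```python
import numpy as np

def sorted_l1_penalty(beta, lam):
    """SLOPE's sorted ell_1 penalty: sum_i lam_i * |beta|_(i),
    where |beta| is sorted in decreasing order and lam is non-increasing."""
    abs_sorted = np.sort(np.abs(beta))[::-1]  # |beta| in decreasing order
    return float(np.dot(lam, abs_sorted))

def k_level_lambda(p, levels, sizes):
    """Hypothetical helper: build a k-level SLOPE sequence in which lam
    takes only k distinct non-increasing values over blocks of given sizes."""
    assert sum(sizes) == p
    assert all(levels[i] >= levels[i + 1] for i in range(len(levels) - 1))
    return np.repeat(levels, sizes)

# Example: p = 5 coefficients, 2-level SLOPE (k = 2)
beta = np.array([0.3, -2.0, 0.0, 1.1, -0.4])
lam = k_level_lambda(5, levels=[1.0, 0.5], sizes=[2, 3])  # [1.0, 1.0, 0.5, 0.5, 0.5]
print(sorted_l1_penalty(beta, lam))  # 1.0*2.0 + 1.0*1.1 + 0.5*(0.4 + 0.3 + 0.0) = 3.45
```

With $k \ll p$, a zeroth-order search only has to tune $k$ levels (and their block sizes) instead of a full length-$p$ sequence, which is the kind of accuracy-versus-speed trade-off the abstract refers to.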
Cite
Text
Zhang and Bu. "Efficient Designs of SLOPE Penalty Sequences in Finite Dimension." Artificial Intelligence and Statistics, 2021.

Markdown

[Zhang and Bu. "Efficient Designs of SLOPE Penalty Sequences in Finite Dimension." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/zhang2021aistats-efficient/)

BibTeX
@inproceedings{zhang2021aistats-efficient,
title = {{Efficient Designs of SLOPE Penalty Sequences in Finite Dimension}},
author = {Zhang, Yiliang and Bu, Zhiqi},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {3277--3285},
volume = {130},
url = {https://mlanthology.org/aistats/2021/zhang2021aistats-efficient/}
}