Weighted L1 and L0 Regularization Using Proximal Operator Splitting Methods
Abstract
This paper develops a joint weighted $\ell_1$- and $\ell_0$-norm (WL1L0) regularization method by leveraging proximal operators and translation mapping techniques to mitigate the bias introduced by the $\ell_1$-norm in applications to high-dimensional data. A weighting parameter $\alpha$ is incorporated to control the influence of both regularizers. Our broadly applicable model is nonconvex and nonsmooth, but we show convergence for the alternating direction method of multipliers (ADMM) and the strictly contractive Peaceman–Rachford splitting method (SCPRSM). Moreover, we evaluate the effectiveness of our model on both simulated and real high-dimensional genomic datasets by comparing with adaptive versions of the least absolute shrinkage and selection operator (LASSO), elastic net (EN), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). The results show that WL1L0 outperforms the LASSO, EN, SCAD and MCP by consistently achieving the lowest mean squared error (MSE) across all datasets, indicating its superior ability to handle large high-dimensional data. Furthermore, the WL1L0-SCPRSM also achieves the sparsest solution.
Cite
Text
Berkessa and Waldmann. "Weighted L1 and L0 Regularization Using Proximal Operator Splitting Methods." Transactions on Machine Learning Research, 2024.
Markdown
[Berkessa and Waldmann. "Weighted L1 and L0 Regularization Using Proximal Operator Splitting Methods." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/berkessa2024tmlr-weighted/)
BibTeX
@article{berkessa2024tmlr-weighted,
title = {{Weighted L1 and L0 Regularization Using Proximal Operator Splitting Methods}},
author = {Berkessa, Zewude A. and Waldmann, Patrik},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/berkessa2024tmlr-weighted/}
}