Accelerated Projected Gradient Algorithms for Sparsity Constrained Optimization Problems
Abstract
We consider the projected gradient algorithm for the nonconvex best subset selection problem that minimizes a given empirical loss function under an $\ell_0$-norm constraint. By decomposing the feasible set of the given sparsity constraint as a finite union of linear subspaces, we present two acceleration schemes with global convergence guarantees, one by same-space extrapolation and the other by subspace identification. The former fully utilizes the problem structure to greatly accelerate the optimization speed with only negligible additional cost. The latter leads to a two-stage meta-algorithm that first uses classical projected gradient iterations to identify the correct subspace containing an optimal solution, and then switches to a highly efficient smooth optimization method in the identified subspace to attain superlinear convergence. Experiments demonstrate that the proposed accelerated algorithms are orders of magnitude faster than their non-accelerated counterparts as well as the state of the art.
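To make the two acceleration ideas in the abstract concrete, below is a minimal NumPy/SciPy sketch of (i) projected gradient with hard-thresholding onto the $\ell_0$ ball plus an extrapolation step applied only when consecutive iterates share the same support (the "same subspace"), and (ii) a two-stage variant that identifies a candidate support with plain projected gradient and then refines it with a smooth quasi-Newton solve. The function names, the fixed extrapolation weight, and the use of BFGS in the second stage are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np
from scipy.optimize import minimize

def hard_threshold(x, s):
    """Project x onto {z : ||z||_0 <= s} by keeping its s largest-magnitude entries."""
    z = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -s)[-s:]
    z[idx] = x[idx]
    return z

def pg_with_extrapolation(grad, x0, s, step, iters=300, beta=0.9):
    """Projected gradient with a same-support extrapolation step (heuristic sketch).

    Extrapolation is only applied when the current and previous iterates lie in
    the same subspace (identical supports); beta is an assumed fixed weight.
    """
    x_prev = x = hard_threshold(x0, s)
    for _ in range(iters):
        y = x
        if np.array_equal(np.nonzero(x)[0], np.nonzero(x_prev)[0]):
            y = x + beta * (x - x_prev)  # extrapolate within the identified subspace
        x_prev, x = x, hard_threshold(y - step * grad(y), s)
    return x

def two_stage(f, grad, x0, s, step, pg_iters=100):
    """Stage 1: plain projected gradient to identify a candidate support.
    Stage 2: smooth optimization restricted to the identified coordinates."""
    x = x0.copy()
    for _ in range(pg_iters):
        x = hard_threshold(x - step * grad(x), s)
    support = np.flatnonzero(x)

    def f_restricted(z):
        full = np.zeros_like(x)
        full[support] = z
        return f(full)

    res = minimize(f_restricted, x[support], method="BFGS")
    x_out = np.zeros_like(x)
    x_out[support] = res.x
    return x_out
```

For instance, with a least-squares loss one would pass `f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)` and `grad = lambda x: A.T @ (A @ x - b)` with a step size no larger than the reciprocal of the largest eigenvalue of `A.T @ A`.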
Cite
Text
Alcantara and Lee. "Accelerated Projected Gradient Algorithms for Sparsity Constrained Optimization Problems." Neural Information Processing Systems, 2022.

Markdown
[Alcantara and Lee. "Accelerated Projected Gradient Algorithms for Sparsity Constrained Optimization Problems." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/alcantara2022neurips-accelerated/)

BibTeX
@inproceedings{alcantara2022neurips-accelerated,
title = {{Accelerated Projected Gradient Algorithms for Sparsity Constrained Optimization Problems}},
author = {Alcantara, Jan Harold and Lee, Ching-pei},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/alcantara2022neurips-accelerated/}
}