Approximation Hardness for a Class of Sparse Optimization Problems
Abstract
In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to an $n\times d$ problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P $=$ NP.
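To make the statements concrete, here is one plausible formalization of the problems in question, based only on the abstract; the symbols $\ell$ (convex loss), $p$ (nonconvex sparse penalty), and $\lambda$ (tuning parameter) are our shorthand, not necessarily the paper's notation. The sparse penalized problem over data $(A, b)$ with $A \in \mathbb{R}^{n\times d}$ reads

$$\min_{x\in\mathbb{R}^d} \; \sum_{i=1}^{n} \ell\big(a_i^\top x,\, b_i\big) \;+\; \lambda \sum_{j=1}^{d} p\big(|x_j|\big),$$

where, for instance, $\ell(u,b) = (u-b)^2$ and $p(t) = t^q$ with $0 < q < 1$ recover $L_q$-penalized least squares. A constrained version presumably moves the penalty into the feasible set, e.g.

$$\min_{x\in\mathbb{R}^d} \; \sum_{i=1}^{n} \ell\big(a_i^\top x,\, b_i\big) \quad \text{s.t.} \quad \sum_{j=1}^{d} p\big(|x_j|\big) \le \lambda,$$

and the solution path is then the map $\lambda \mapsto x^*(\lambda)$ traced out as the tuning parameter increases. Under this reading, an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution is a point whose objective value is within an additive $\mathcal{O}(n^{c_1}d^{c_2})$ of the global minimum.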
Cite
Chen, Yichen, Yinyu Ye, and Mengdi Wang. "Approximation Hardness for a Class of Sparse Optimization Problems." Journal of Machine Learning Research, 20:1-27, 2019. https://mlanthology.org/jmlr/2019/chen2019jmlr-approximation/
BibTeX:
@article{chen2019jmlr-approximation,
title = {{Approximation Hardness for a Class of Sparse Optimization Problems}},
author = {Chen, Yichen and Ye, Yinyu and Wang, Mengdi},
journal = {Journal of Machine Learning Research},
year = {2019},
pages = {1--27},
volume = {20},
url = {https://mlanthology.org/jmlr/2019/chen2019jmlr-approximation/}
}