Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Abstract

We consider the regularized sparse minimization problem, whose objective is an empirical sum of loss functions over $n$ data points (each of dimension $d$) plus a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.
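To make the problem class concrete, the following sketch evaluates one instance of the objective described above: squared loss plus the $\ell_q$ "bridge" penalty $|x_j|^q$ with $q\in(0,1)$, a standard concave sparsity penalty. These specific choices of loss and penalty are illustrative assumptions; the paper's hardness result covers a broad class of such functions.

```python
import numpy as np

def sparse_objective(x, A, b, lam=0.1, q=0.5):
    """Regularized sparse objective: empirical loss plus concave penalty.

    Illustrative instance only: squared loss with the l_q penalty
    |x_j|^q, q in (0, 1), one member of the concave-penalty class
    the hardness result applies to.
    """
    loss = np.sum((A @ x - b) ** 2) / len(b)   # empirical sum over n data points
    penalty = lam * np.sum(np.abs(x) ** q)     # nonconvex sparsity penalty
    return loss + penalty

# Tiny example: n = 3 data points of dimension d = 2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 0.0, 1.0])
dense = sparse_objective(np.array([0.5, 0.5]), A, b)
sparse = sparse_objective(np.array([1.0, 0.0]), A, b)
```

Here the sparse point fits the data exactly and pays a smaller penalty, so it attains a lower objective value; the hardness result says that finding even an approximately optimal such point is intractable in general.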

Cite

Text

Chen et al. "Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions." International Conference on Machine Learning, 2017.

Markdown

[Chen et al. "Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/chen2017icml-strong/)

BibTeX

@inproceedings{chen2017icml-strong,
  title     = {{Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions}},
  author    = {Chen, Yichen and Ge, Dongdong and Wang, Mengdi and Wang, Zizhuo and Ye, Yinyu and Yin, Hao},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {740--747},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/chen2017icml-strong/}
}