Maximal Sparsity with Deep Networks?

Abstract

The iterations of many sparse estimation algorithms consist of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal $\ell_0$-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
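As a concrete illustration of the filter-plus-threshold structure the abstract describes, the following is a minimal NumPy sketch of unrolled ISTA-style iterations, where each iteration applies a fixed linear map followed by soft-thresholding and therefore looks like one layer of a deep network with shared weights. The function names (`soft_threshold`, `ista_unrolled`), the 1/L step size, and the layer count are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def soft_threshold(z, theta):
    # Elementwise shrinkage: the thresholding nonlinearity of ISTA-style updates.
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista_unrolled(A, y, lam, n_layers=20):
    # Unrolled iterations for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    # Each "layer" is a fixed linear filter followed by soft-thresholding,
    # so the loop resembles a deep network with shared, hand-crafted weights.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W = A.T / L                              # fixed "encoder" weights applied to y
    S = np.eye(A.shape[1]) - (A.T @ A) / L   # fixed recurrent weights applied to x
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W @ y + S @ x, lam / L)
    return x
```

In a learned variant of this sketch, the fixed matrices `W` and `S` (and the thresholds) would become trainable parameters fit from data, which is the kind of surrogate for hand-crafted iterative sparse estimation that the paper examines.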

Cite

Text

Xin et al. "Maximal Sparsity with Deep Networks?" Neural Information Processing Systems, 2016.

Markdown

[Xin et al. "Maximal Sparsity with Deep Networks?" Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/xin2016neurips-maximal/)

BibTeX

@inproceedings{xin2016neurips-maximal,
  title     = {{Maximal Sparsity with Deep Networks?}},
  author    = {Xin, Bo and Wang, Yizhou and Gao, Wen and Wipf, David and Wang, Baoyuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {4340--4348},
  url       = {https://mlanthology.org/neurips/2016/xin2016neurips-maximal/}
}