On the Power of Compressed Sensing with Generative Models

Abstract

The goal of compressed sensing is to recover a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with Bora-Jalal-Price-Dimakis'17 has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. We present two results establishing the difficulty and strength of this latter task, showing that existing bounds are tight: First, we provide a lower bound matching the Bora et al. upper bound for compressed sensing with $L$-Lipschitz generative models $G$; this lower bound holds even for the more relaxed goal of \emph{non-uniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure by constructing a ReLU-based neural network with $2$ hidden layers and $O(n)$ activations per layer whose range is precisely the set of all $k$-sparse vectors.
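To make the setup concrete, the sketch below shows the Bora et al. style recovery procedure that these bounds concern: gradient descent over the latent code $z$ on $f(z) = \|A G(z) - y\|^2$. The toy ReLU generator, the dimensions, the step size, and the single random restart are all illustrative assumptions for this sketch, not constructions from the paper.

import numpy as np

# Toy instance (all sizes and weights here are illustrative assumptions):
# G(z) = W2 @ relu(W1 @ z) is a small random ReLU generator R^k -> R^n,
# and y = A @ G(z_true) + noise gives m << n noisy linear measurements.
rng = np.random.default_rng(0)
k, h, n, m = 5, 40, 100, 25
W1 = rng.standard_normal((h, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, h)) / np.sqrt(h)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

def G_jac(z):
    # Jacobian of G at z: W2 @ diag(1[W1 z > 0]) @ W1
    mask = (W1 @ z > 0).astype(float)
    return W2 @ (mask[:, None] * W1)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix
z_true = rng.standard_normal(k)
y = A @ G(z_true) + 0.01 * rng.standard_normal(m)

# Estimate z by gradient descent on f(z) = ||A G(z) - y||^2.
# A single random restart is used here; since f is nonconvex,
# multiple restarts are generally needed in practice.
z = rng.standard_normal(k)
lr = 1e-3
for _ in range(5000):
    r = A @ G(z) - y
    grad = 2.0 * G_jac(z).T @ (A.T @ r)  # chain rule through A and G
    z -= lr * grad

rel_err = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
print("relative recovery error:", rel_err)

The point of the sketch is only to fix notation: recovery happens in the $k$-dimensional latent space even though the signal lives in $\mathbb{R}^n$, which is why $m$ can be far smaller than $n$.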

Cite

Text

Kamath et al. "On the Power of Compressed Sensing with Generative Models." International Conference on Machine Learning, 2020.

Markdown

[Kamath et al. "On the Power of Compressed Sensing with Generative Models." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/kamath2020icml-power/)

BibTeX

@inproceedings{kamath2020icml-power,
  title     = {{On the Power of Compressed Sensing with Generative Models}},
  author    = {Kamath, Akshay and Price, Eric and Karmalkar, Sushrut},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {5101--5109},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/kamath2020icml-power/}
}