Lower Bounds for Compressed Sensing with Generative Models
Abstract
The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with [BDJP17] has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the [BDJP17] upper bound for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires roughly $\Omega(k \log L)$ linear measurements for sparse recovery to be possible. This holds even for the more relaxed goal of *nonuniform* recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \mathbb{R}^{2k} \to \mathbb{R}^n$ with $O(1)$ layers and $O(kn)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors.
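For context on the [BDJP17] framework that these bounds concern, the following is a minimal sketch of recovery with a generative prior: estimate the latent code by gradient descent on $\|AG(z) - y\|_2^2$. The toy ReLU network, dimensions, step size, and iteration count here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Sketch of [BDJP17]-style recovery: given y ~= A x* with x* = G(z*) in the
# range of a generative model G, minimize ||A G(z) - y||^2 over the latent z.
# G is a toy two-layer ReLU network with fixed random weights; gradient
# descent may only reach a local optimum, so this is illustrative only.

rng = np.random.default_rng(0)
k, h, n, m = 5, 64, 100, 25        # latent dim, hidden width, signal dim, measurements

W1 = rng.standard_normal((h, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, h)) / np.sqrt(h)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)          # ReLU generative model

def G_jacobian(z):
    mask = (W1 @ z > 0).astype(float)            # ReLU subgradient pattern
    return W2 @ (mask[:, None] * W1)             # (n, k) Jacobian of G at z

A = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian measurement matrix
z_true = rng.standard_normal(k)
y = A @ G(z_true)                                # noiseless measurements

z = rng.standard_normal(k)                       # random initialization
lr = 0.01
for _ in range(5000):
    r = A @ G(z) - y                             # residual in measurement space
    grad = G_jacobian(z).T @ (A.T @ r)           # gradient of 0.5 * ||A G(z) - y||^2
    z -= lr * grad

print("relative recovery error:",
      np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```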
Cite

Text
Kamath et al. "Lower Bounds for Compressed Sensing with Generative Models." NeurIPS 2019 Workshops: Deep_Inverse, 2019.

Markdown
[Kamath et al. "Lower Bounds for Compressed Sensing with Generative Models." NeurIPS 2019 Workshops: Deep_Inverse, 2019.](https://mlanthology.org/neuripsw/2019/kamath2019neuripsw-lower/)

BibTeX
@inproceedings{kamath2019neuripsw-lower,
title = {{Lower Bounds for Compressed Sensing with Generative Models}},
author = {Kamath, Akshay and Karmalkar, Sushrut and Price, Eric},
booktitle = {NeurIPS 2019 Workshops: Deep_Inverse},
year = {2019},
url = {https://mlanthology.org/neuripsw/2019/kamath2019neuripsw-lower/}
}