The Generalized Lasso with Nonlinear Observations and Generative Priors

Abstract

In this paper, we study the problem of signal estimation from noisy non-linear measurements when the unknown $n$-dimensional signal lies in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs. We assume sub-Gaussian measurements, an assumption satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models. In addition, we consider the impact of adversarial corruptions on these measurements. Our analysis is based on a generalized Lasso approach (Plan and Vershynin, 2016). We first provide a non-uniform recovery guarantee, which states that under i.i.d. Gaussian measurements, roughly $O\left(\frac{k}{\epsilon^2}\log L\right)$ samples suffice for recovery with an $\ell_2$-error of $\epsilon$, and that this scheme is robust to adversarial noise. We then apply this result to neural network generative models, and discuss various extensions to other models and non-i.i.d. measurements. Moreover, we show that our result can be extended to a uniform recovery guarantee under the assumption of a so-called local embedding property, which is satisfied by the 1-bit and censored Tobit models.
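To make the setup concrete, below is a minimal sketch (not the paper's code) of the generalized Lasso with a generative prior: given 1-bit measurements $y = \mathrm{sign}(Ax^*)$ of a signal $x^* = G(z^*)$, we heuristically minimize $\|AG(z) - y\|_2^2$ over the latent $z$ by gradient descent. The random two-layer ReLU generator, all dimensions, and the step size are illustrative assumptions; no global convergence guarantee is implied for this nonconvex surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
k, h, n, m = 10, 50, 200, 300  # latent dim, hidden width, signal dim, samples

# Hypothetical L-Lipschitz generator: G(z) = W2 @ relu(W1 @ z), weights fixed.
W1 = rng.normal(size=(h, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, h)) / np.sqrt(h)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

# Ground truth in the range of G; i.i.d. Gaussian A; 1-bit observations.
z_true = rng.normal(size=k)
x_true = G(z_true)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = np.sign(A @ x_true)

# Latent gradient descent on f(z) = ||A G(z) - y||^2, a common heuristic for
# solving the generalized Lasso over the (nonconvex) range of G.
z = rng.normal(size=k)
lr = 1e-3
for _ in range(5000):
    pre = W1 @ z
    r = A @ (W2 @ np.maximum(pre, 0.0)) - y      # residual A G(z) - y
    J = W2 @ ((pre > 0)[:, None] * W1)           # Jacobian of G at z, (n, k)
    z -= lr * (J.T @ (A.T @ r))                  # gradient step (factor 2 in lr)

# 1-bit measurements discard the signal's norm, so compare directions only.
x_hat = G(z)
err = np.linalg.norm(x_hat / np.linalg.norm(x_hat)
                     - x_true / np.linalg.norm(x_true))
print(f"normalized l2 error: {err:.3f}")
```

Consistent with the theory for single-index models, the estimate is only meaningful up to a positive scaling, which is why the final comparison normalizes both vectors.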

Cite

Text

Liu and Scarlett. "The Generalized Lasso with Nonlinear Observations and Generative Priors." Neural Information Processing Systems, 2020.

Markdown

[Liu and Scarlett. "The Generalized Lasso with Nonlinear Observations and Generative Priors." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/liu2020neurips-generalized/)

BibTeX

@inproceedings{liu2020neurips-generalized,
  title     = {{The Generalized Lasso with Nonlinear Observations and Generative Priors}},
  author    = {Liu, Zhaoqiang and Scarlett, Jonathan},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/liu2020neurips-generalized/}
}