B-Bit Marginal Regression
Abstract
We consider the problem of sparse signal recovery from $m$ linear measurements quantized to $b$ bits. $b$-bit Marginal Regression is proposed as a recovery algorithm. We study the question of choosing $b$ in the setting of a given budget of bits $B = m \cdot b$ and derive a single easy-to-compute expression characterizing the trade-off between $m$ and $b$. The choice $b = 1$ turns out to be optimal for estimating the unit vector corresponding to the signal, for any level of additive Gaussian noise before quantization as well as for adversarial noise. For $b \geq 2$, we show that Lloyd-Max quantization constitutes an optimal quantization scheme and that the norm of the signal can be estimated consistently by maximum likelihood.
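The following is a minimal sketch of the recovery procedure described in the abstract, under stated assumptions: measurements $y = Q_b(Ax + \text{noise})$ are obtained with a simple scalar $b$-bit quantizer (a uniform quantizer is used here for illustration; the paper analyzes Lloyd-Max quantization for $b \geq 2$), and the signal direction is estimated by correlating the quantized measurements with each column of $A$ and keeping the $s$ largest entries in magnitude. Function names such as `quantize` and `marginal_regression` are illustrative, not from the paper.

```python
import numpy as np

def quantize(z, b, clip=3.0):
    """Scalar b-bit quantizer: clip to [-clip, clip], map to 2^b uniform levels.
    (Illustrative uniform quantizer; the paper studies Lloyd-Max for b >= 2.)"""
    levels = 2 ** b
    edges = np.linspace(-clip, clip, levels + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    idx = np.digitize(z, edges[1:-1])          # bin index in {0, ..., levels-1}
    return centers[idx]

def marginal_regression(A, y_quant, s):
    """Estimate the direction of an s-sparse signal from quantized measurements."""
    m = A.shape[0]
    corr = A.T @ y_quant / m                   # marginal correlations
    support = np.argsort(np.abs(corr))[-s:]    # keep the s largest in magnitude
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = corr[support]
    return x_hat / np.linalg.norm(x_hat)       # unit-norm direction estimate

# Toy usage: s-sparse signal in R^n, m measurements quantized to b = 1 bit.
rng = np.random.default_rng(0)
n, m, s, b = 500, 200, 5, 1
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = 1.0
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = quantize(A @ x + 0.1 * rng.standard_normal(m), b)
x_hat = marginal_regression(A, y, s)
print("correlation with true direction:", abs(x @ x_hat))
```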
Cite
Text
Slawski and Li. "B-Bit Marginal Regression." Neural Information Processing Systems, 2015.
Markdown
[Slawski and Li. "B-Bit Marginal Regression." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/slawski2015neurips-bbit/)
BibTeX
@inproceedings{slawski2015neurips-bbit,
title = {{B-Bit Marginal Regression}},
author = {Slawski, Martin and Li, Ping},
booktitle = {Neural Information Processing Systems},
year = {2015},
pages = {2062-2070},
url = {https://mlanthology.org/neurips/2015/slawski2015neurips-bbit/}
}