An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory
Abstract
In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik–Chervonenkis (VC) dimension of the set of half-spaces in $\mathbb{R}^n$ generated by $k$-sparse vectors is bounded below by $k ( \lfloor \lg (n/k) \rfloor + 1 )$ and above by $\lfloor 2k \lg (en) \rfloor$. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a $k$-sparse vector with $O(k \lg n)$ measurements, given only the signs of the measurement vector. This result holds for \textit{all} probability measures on $\mathbb{R}^n$. The theory also applies to the case of noisy labels, where the signs of the measurements are flipped with some unknown probability.
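The two VC-dimension bounds quoted above are simple closed-form expressions in $n$ and $k$. As a quick numerical illustration (the function names and sample values below are ours, not from the paper), they can be evaluated directly:

```python
from math import e, floor, log2

def vc_dim_lower(n, k):
    # Lower bound from the abstract: k * (floor(lg(n/k)) + 1)
    return k * (floor(log2(n / k)) + 1)

def vc_dim_upper(n, k):
    # Upper bound from the abstract: floor(2k * lg(e*n))
    return floor(2 * k * log2(e * n))

# Illustrative values, e.g. n = 1000 ambient dimensions, sparsity k = 10:
print(vc_dim_lower(1000, 10))  # 70
print(vc_dim_upper(1000, 10))  # 228
```

Both bounds grow as $O(k \lg n)$, which is what links the VC estimate to the stated sample complexity of a consistent recovery algorithm.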
Cite
Text
Ahsen and Vidyasagar. "An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory." Journal of Machine Learning Research, 2019.
BibTeX
@article{ahsen2019jmlr-approach,
title = {{An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory}},
author = {Ahsen, Mehmet Eren and Vidyasagar, Mathukumalli},
journal = {Journal of Machine Learning Research},
year = {2019},
pages = {1--23},
volume = {20},
url = {https://mlanthology.org/jmlr/2019/ahsen2019jmlr-approach/}
}