Towards a Lower Sample Complexity for Robust One-Bit Compressed Sensing
Abstract
In this paper, we propose a novel algorithm based on a nonconvex sparsity-inducing penalty for one-bit compressed sensing. We prove that our algorithm has a sample complexity of O(s/ε^2) for strong signals, and O(s\log d/ε^2) for weak signals, where s is the number of nonzero entries in the signal vector, d is the signal dimension, and ε is the recovery error. For general signals, the sample complexity of our algorithm lies between O(s/ε^2) and O(s\log d/ε^2). This is a remarkable improvement over the existing best sample complexity O(s\log d/ε^2). Furthermore, we show that our algorithm achieves exact support recovery with high probability for strong signals. Our theory is verified by extensive numerical experiments, which clearly illustrate the superiority of our algorithm for both approximate signal and support recovery in the noisy setting.
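For concreteness, the abstract's setting is the standard one-bit compressed sensing model, in which an s-sparse, d-dimensional signal is observed only through the signs of linear measurements. The sketch below simulates that measurement model; it is illustrative only and not the paper's algorithm, and all sizes (d, s, n) and the sign-flip noise rate are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper): a d-dimensional signal
# with s nonzero entries, observed through n one-bit measurements.
d, s, n = 100, 5, 200

# Build an s-sparse, unit-norm signal x.
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# Standard Gaussian sensing matrix A; only the signs of Ax are observed.
A = rng.standard_normal((n, d))
y = np.sign(A @ x)

# Noisy setting: each measurement's sign is flipped independently
# with a small (assumed) probability, modeling measurement noise.
flip = rng.random(n) < 0.05
y_noisy = np.where(flip, -y, y)
```

Recovery then amounts to estimating x (and its support) from A and y_noisy alone, which is the regime in which the sample complexities above are stated.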
Cite
Text
Zhu and Gu. "Towards a Lower Sample Complexity for Robust One-Bit Compressed Sensing." International Conference on Machine Learning, 2015.

Markdown

[Zhu and Gu. "Towards a Lower Sample Complexity for Robust One-Bit Compressed Sensing." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/zhu2015icml-lower/)

BibTeX
@inproceedings{zhu2015icml-lower,
title = {{Towards a Lower Sample Complexity for Robust One-Bit Compressed Sensing}},
author = {Zhu, Rongda and Gu, Quanquan},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {739-747},
volume = {37},
url = {https://mlanthology.org/icml/2015/zhu2015icml-lower/}
}