Learning to Recover Sparse Signals
Abstract
In compressed sensing, a central problem is to reconstruct a high-dimensional sparse signal from a small number of observations. In this work, we develop a new sparse signal recovery algorithm using reinforcement learning (RL) and Monte Carlo Tree Search (MCTS). Similar to orthogonal matching pursuit (OMP), our RL+MCTS algorithm chooses the support of the signal sequentially. The key novelty is that the proposed algorithm learns how to choose the next support element, as opposed to following a pre-designed rule as in OMP. Empirical results demonstrate the superior performance of the proposed RL+MCTS algorithm over existing sparse signal recovery algorithms.
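To make the contrast concrete, here is a minimal sketch of the pre-designed OMP rule the abstract refers to: at each step the support index most correlated with the current residual is selected, followed by a least-squares fit on the chosen columns. This is an illustrative implementation, not the authors' code; the function name and interface are my own.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k support indices.

    A: (m, n) sensing matrix, y: (m,) observations, k: sparsity level.
    Returns the recovered signal x and its support.
    """
    m, n = A.shape
    support = []
    residual = y.copy()
    for _ in range(k):
        # Pre-designed rule: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the current support, then update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x, sorted(support)
```

The RL+MCTS approach replaces the fixed argmax selection rule with a learned policy, while keeping the same sequential support-construction structure.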
Cite

Zhong et al. "Learning to Recover Sparse Signals." NeurIPS 2019 Workshops: Deep_Inverse, 2019.

BibTeX
@inproceedings{zhong2019neuripsw-learning,
  title     = {{Learning to Recover Sparse Signals}},
  author    = {Zhong, Sichen and Zhao, Yue and Chen, Jianshu},
  booktitle = {NeurIPS 2019 Workshops: Deep_Inverse},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/zhong2019neuripsw-learning/}
}