Non-Convex Rank/Sparsity Regularization and Local Minima
Abstract
This paper considers the problem of recovering either a low-rank matrix or a sparse vector from observations of linear combinations of the vector or matrix elements. Recent methods replace the non-convex regularization with ℓ1 or nuclear norm relaxations. It is well known that this approach recovers near-optimal solutions if a so-called restricted isometry property (RIP) holds. On the other hand, the relaxation also introduces a shrinking bias which can degrade the solution. In this paper we study an alternative non-convex regularization term that does not suffer from this bias. Our main theoretical results show that if a RIP holds then the stationary points are often well separated, in the sense that their differences must be of high cardinality/rank. Thus, with a suitable initial solution the approach is unlikely to fall into a bad local minimum. Our numerical tests show that the approach is likely to converge to a better solution than standard ℓ1/nuclear-norm relaxation even when starting from trivial initializations. In many cases our results can also be used to verify global optimality of our method.
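The shrinking bias mentioned above can be seen already in the proximal operators of the two penalty types: the ℓ1 proximal map (soft thresholding) reduces the magnitude of every surviving entry, while the proximal map of a cardinality penalty (hard thresholding) leaves surviving entries untouched. The following is a minimal sketch of this contrast, assuming NumPy; the thresholding functions are generic illustrations, not the specific non-convex regularizer studied in the paper.

```python
import numpy as np

def soft_threshold(y, lam):
    # Proximal operator of lam * ||x||_1: every surviving entry is shrunk by lam.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def hard_threshold(y, lam):
    # Proximal operator of lam * card(x): entries with |y_i| > sqrt(2*lam) are kept unchanged.
    out = y.copy()
    out[np.abs(y) <= np.sqrt(2.0 * lam)] = 0.0
    return out

rng = np.random.default_rng(0)
x_true = np.zeros(50)
x_true[:5] = rng.normal(0.0, 5.0, 5)        # 5-sparse ground-truth vector
y = x_true + rng.normal(0.0, 0.1, 50)       # noisy observation of the entries

lam = 1.0
x_l1 = soft_threshold(y, lam)
x_l0 = hard_threshold(y, lam)

# The l1 estimate underestimates the magnitude of the recovered entries
# (the shrinking bias); the cardinality-based estimate does not.
print("error on support, soft threshold:", np.linalg.norm(x_l1[:5] - x_true[:5]))
print("error on support, hard threshold:", np.linalg.norm(x_l0[:5] - x_true[:5]))
```

On the true support the soft-thresholded estimate is off by roughly lam per entry, whereas the hard-thresholded estimate matches the noisy observations exactly, which is the bias-free behaviour the abstract attributes to non-convex regularization.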
Cite
Text
Olsson et al. "Non-Convex Rank/Sparsity Regularization and Local Minima." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.44

Markdown
[Olsson et al. "Non-Convex Rank/Sparsity Regularization and Local Minima." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/olsson2017iccv-nonconvex/) doi:10.1109/ICCV.2017.44

BibTeX
@inproceedings{olsson2017iccv-nonconvex,
title = {{Non-Convex Rank/Sparsity Regularization and Local Minima}},
author = {Olsson, Carl and Carlsson, Marcus and Andersson, Fredrik and Larsson, Viktor},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.44},
url = {https://mlanthology.org/iccv/2017/olsson2017iccv-nonconvex/}
}