Estimation of (near) Low-Rank Matrices with Noise and High-Dimensional Scaling

Abstract

We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ ℝ^{k×p} that is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
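The estimator studied in the abstract can be sketched numerically. Below is a minimal, hypothetical NumPy illustration of nuclear-norm regularized least squares for the random-projections observation model y_i = ⟨⟨X_i, Θ*⟩⟩ + noise, solved by proximal gradient descent with singular-value soft-thresholding. The dimensions, step size, and regularization level λ are illustrative choices, not the theoretically tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, r, N = 10, 10, 2, 400  # matrix dimensions, true rank, sample size (illustrative)

# Exactly rank-r target matrix Theta*.
theta_star = rng.standard_normal((k, r)) @ rng.standard_normal((r, p))

# Random Gaussian measurement matrices X_i and noisy trace observations.
X = rng.standard_normal((N, k, p))
y = np.einsum('nkp,kp->n', X, theta_star) + 0.1 * rng.standard_normal(N)

def svt(M, tau):
    """Soft-threshold the singular values of M (prox operator of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

lam, step = 0.1, 0.4  # hypothetical choices; theory sets lam from the noise level
theta = np.zeros((k, p))
for _ in range(500):
    # Gradient of the squared-loss term (1/2N) * sum_i (<<X_i, theta>> - y_i)^2.
    resid = np.einsum('nkp,kp->n', X, theta) - y
    grad = np.einsum('n,nkp->kp', resid, X) / N
    # Proximal gradient step: gradient descent followed by singular-value thresholding.
    theta = svt(theta - step * grad, step * lam)

rel_err = np.linalg.norm(theta - theta_star) / np.linalg.norm(theta_star)
```

With N well above r(k + p), the relative Frobenius error should be small and shrink as N grows, matching the qualitative scaling the abstract describes.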

Cite

Text

Negahban and Wainwright. "Estimation of (near) Low-Rank Matrices with Noise and High-Dimensional Scaling." International Conference on Machine Learning, 2010. doi:10.1214/10-AOS850

Markdown

[Negahban and Wainwright. "Estimation of (near) Low-Rank Matrices with Noise and High-Dimensional Scaling." International Conference on Machine Learning, 2010.](https://mlanthology.org/icml/2010/negahban2010icml-estimation/) doi:10.1214/10-AOS850

BibTeX

@inproceedings{negahban2010icml-estimation,
  title     = {{Estimation of (near) Low-Rank Matrices with Noise and High-Dimensional Scaling}},
  author    = {Negahban, Sahand N. and Wainwright, Martin J.},
  booktitle = {International Conference on Machine Learning},
  year      = {2010},
  pages     = {823--830},
  doi       = {10.1214/10-AOS850},
  url       = {https://mlanthology.org/icml/2010/negahban2010icml-estimation/}
}