Max vs Min: Tensor Decomposition and ICA with Nearly Linear Sample Complexity

Abstract

We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.

Cite

Text

Vempala and Xiao. "Max vs Min: Tensor Decomposition and ICA with Nearly Linear Sample Complexity." Annual Conference on Computational Learning Theory, 2015.

Markdown

[Vempala and Xiao. "Max vs Min: Tensor Decomposition and ICA with Nearly Linear Sample Complexity." Annual Conference on Computational Learning Theory, 2015.](https://mlanthology.org/colt/2015/vempala2015colt-max/)

BibTeX

@inproceedings{vempala2015colt-max,
  title     = {{Max vs Min: Tensor Decomposition and ICA with Nearly Linear Sample Complexity}},
  author    = {Vempala, Santosh S. and Xiao, Ying},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2015},
  pages     = {1710--1723},
  url       = {https://mlanthology.org/colt/2015/vempala2015colt-max/}
}