More Efficient Sampling for Tensor Decomposition with Worst-Case Guarantees

Abstract

Recent papers have developed alternating least squares (ALS) methods for CP and tensor ring decomposition whose per-iteration cost is sublinear in the number of input tensor entries for low-rank decompositions. However, the per-iteration cost of these methods still has an exponential dependence on the number of tensor modes when parameters are chosen to achieve certain worst-case guarantees. In this paper, we propose sampling-based ALS methods for the CP and tensor ring decompositions whose cost does not have this exponential dependence, thereby significantly improving on the previous state-of-the-art. We provide a detailed theoretical analysis and also apply the methods in a feature extraction experiment.
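To give a concrete picture of the kind of sampled ALS update the abstract refers to, the following is a minimal sketch in Python/NumPy. It is not the paper's algorithm: the paper samples rows according to (approximate) leverage scores of the Khatri-Rao design matrices to obtain its worst-case guarantees, whereas this sketch uses plain uniform row sampling, and the function names (khatri_rao, sampled_cp_als) and parameters (n_iter, n_samples) are illustrative choices only.

import numpy as np

def khatri_rao(A, B):
    # Khatri-Rao product: rows indexed by (i, j) pairs, columns shared.
    (I, R), (J, _) = A.shape, B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def sampled_cp_als(X, rank, n_iter=50, n_samples=512, seed=0):
    # Fit rank-R CP factors to a 3-way array X, solving each per-mode
    # least-squares problem on a uniform sample of its rows.
    rng = np.random.default_rng(seed)
    dims = X.shape
    factors = [rng.standard_normal((d, rank)) for d in dims]
    # Mode-n unfoldings, ordered consistently with khatri_rao's rows.
    unfold = [np.moveaxis(X, n, 0).reshape(dims[n], -1) for n in range(3)]
    for _ in range(n_iter):
        for n in range(3):
            others = [factors[m] for m in range(3) if m != n]
            M = khatri_rao(others[0], others[1])  # design matrix
            idx = rng.choice(M.shape[0],
                             size=min(n_samples, M.shape[0]),
                             replace=False)       # uniform row sample
            # Solve the sampled least-squares problem for factor n.
            sol, *_ = np.linalg.lstsq(M[idx], unfold[n][:, idx].T, rcond=None)
            factors[n] = sol.T
    return factors

# Toy usage: recover the factors of a synthetic rank-5 tensor.
rng = np.random.default_rng(1)
true = [rng.standard_normal((30, 5)) for _ in range(3)]
X = np.einsum('ir,jr,kr->ijk', *true)
A, B, C = sampled_cp_als(X, rank=5)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))

The sketch keeps the standard ALS structure (one linear least-squares solve per mode per sweep); the paper's contribution lies in how the sampled rows are chosen so that each solve is accurate with high probability without the exponential dependence on the number of modes.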

Cite

Text

Malik. "More Efficient Sampling for Tensor Decomposition with Worst-Case Guarantees." International Conference on Machine Learning, 2022.

Markdown

[Malik. "More Efficient Sampling for Tensor Decomposition with Worst-Case Guarantees." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/malik2022icml-more/)

BibTeX

@inproceedings{malik2022icml-more,
  title     = {{More Efficient Sampling for Tensor Decomposition with Worst-Case Guarantees}},
  author    = {Malik, Osman Asif},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {14887--14917},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/malik2022icml-more/}
}