Beyond Lazy Training for Over-Parameterized Tensor Decomposition

Abstract

Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(\mathbb{R}^d)^{\otimes l}$ of rank $r$ (where $r\ll d$), can variants of gradient descent find a rank-$m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate decomposition when $m = O^*(r^{2.5l}\log d)$. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
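
To make the setup concrete, the sketch below is a minimal illustration, not the paper's specific variant of gradient descent or its initialization scheme: plain gradient descent on the squared loss for an over-parameterized symmetric CP decomposition of a 3rd-order tensor ($l = 3$), with $m > r$ rank-one components. The dimensions, initialization scale, learning rate, and iteration count are illustrative choices, not values from the paper.

```python
# Minimal sketch (illustrative only; not the paper's algorithm or hyperparameters):
# plain gradient descent on the over-parameterized CP objective
#   L(W) = 0.5 * || sum_{j<=m} w_j ⊗ w_j ⊗ w_j  -  T ||_F^2
# for a symmetric 3rd-order tensor T of rank r, using m > r components.
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 10, 2, 8                      # ambient dimension, true rank, over-parameterized rank

# Ground-truth rank-r symmetric tensor T = sum_i a_i ⊗ a_i ⊗ a_i with unit-norm components
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = np.einsum('ip,iq,is->pqs', A, A, A)

# Over-parameterized model: T_hat(W) = sum_j w_j ⊗ w_j ⊗ w_j, small random initialization
W = 0.3 * rng.standard_normal((m, d))

lr = 0.05
for step in range(3000):
    T_hat = np.einsum('jp,jq,js->pqs', W, W, W)
    R = T_hat - T                                        # residual tensor
    grad = 3.0 * np.einsum('pqs,jq,js->jp', R, W, W)     # dL/dW (R is symmetric)
    W -= lr * grad
    if step % 500 == 0:
        print(f"step {step:4d}  residual norm {np.linalg.norm(R):.4f}")

rel_err = np.linalg.norm(np.einsum('jp,jq,js->pqs', W, W, W) - T) / np.linalg.norm(T)
print("relative error:", rel_err)
```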

Cite

Text

Wang et al. "Beyond Lazy Training for Over-Parameterized Tensor Decomposition." Neural Information Processing Systems, 2020.

Markdown

[Wang et al. "Beyond Lazy Training for Over-Parameterized Tensor Decomposition." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/wang2020neurips-beyond/)

BibTeX

@inproceedings{wang2020neurips-beyond,
  title     = {{Beyond Lazy Training for Over-Parameterized Tensor Decomposition}},
  author    = {Wang, Xiang and Wu, Chenwei and Lee, Jason and Ma, Tengyu and Ge, Rong},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/wang2020neurips-beyond/}
}