Scalable and Sound Low-Rank Tensor Learning

Abstract

Many real-world data arise naturally as tensors. Equipped with a low-rank prior, learning algorithms can benefit from exploiting the rich dependencies encoded in a tensor. Despite its prevalence in low-rank matrix learning, the trace norm ceases to be computationally tractable for tensors, and therefore most existing works resort to matrix unfolding. Although some theoretical guarantees are available, these approaches may lose valuable structural information and are not scalable in general. To address this problem, we propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the resulting approximation bounds transfer efficiently to the original problem via the generalized conditional gradient algorithm. The resulting approach is scalable to large datasets and matches state-of-the-art recovery guarantees. Experimental results on tensor completion and multitask learning confirm the superiority of the proposed method.
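The computational idea sketched in the abstract is a generalized conditional gradient (Frank-Wolfe) loop whose linear oracle only requires an approximate tensor spectral norm. Below is a minimal Python sketch of that pattern for 3-way tensor completion, assuming the squared observed-entry loss 0.5*||mask*(W - T)||^2, a trace-norm ball of hypothetical radius tau, and alternating rank-1 power iterations as the approximate spectral-norm oracle. All function names and parameters here are illustrative assumptions, not the authors' implementation.

import numpy as np

def rank1_power_iteration(G, n_iters=50, seed=0):
    """Approximate the leading rank-1 component of a 3-way tensor G by
    alternating power iterations -- a standard heuristic oracle for the
    tensor spectral norm, which is only approximately optimal in general."""
    rng = np.random.default_rng(seed)
    I, J, K = G.shape
    u = rng.standard_normal(I); u /= np.linalg.norm(u)
    v = rng.standard_normal(J); v /= np.linalg.norm(v)
    w = rng.standard_normal(K); w /= np.linalg.norm(w)
    for _ in range(n_iters):
        u = np.einsum('ijk,j,k->i', G, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', G, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', G, u, v); w /= np.linalg.norm(w)
    return u, v, w

def gcg_tensor_completion(T, mask, tau=10.0, n_steps=100):
    """Conditional gradient over the tensor trace-norm ball of
    (hypothetical) radius tau: each step adds one approximate rank-1 atom."""
    W = np.zeros_like(T)
    for t in range(n_steps):
        grad = mask * (W - T)          # gradient of 0.5*||mask*(W - T)||^2
        u, v, w = rank1_power_iteration(-grad, seed=t)
        atom = tau * np.einsum('i,j,k->ijk', u, v, w)
        eta = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        W = (1 - eta) * W + eta * atom
    return W

Note that this sketch is the simplest constrained instance; the paper's generalized conditional gradient works with the penalized objective, where the approximation guarantee of the oracle is what the authors convert into a guarantee on the original problem.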

Cite

Text

Cheng et al. "Scalable and Sound Low-Rank Tensor Learning." International Conference on Artificial Intelligence and Statistics, 2016.

Markdown

[Cheng et al. "Scalable and Sound Low-Rank Tensor Learning." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/cheng2016aistats-scalable/)

BibTeX

@inproceedings{cheng2016aistats-scalable,
  title     = {{Scalable and Sound Low-Rank Tensor Learning}},
  author    = {Cheng, Hao and Yu, Yaoliang and Zhang, Xinhua and Xing, Eric P. and Schuurmans, Dale},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2016},
  pages     = {1114--1123},
  url       = {https://mlanthology.org/aistats/2016/cheng2016aistats-scalable/}
}