Supervised Learning with Tensor Networks

Abstract

Tensor networks are approximations of high-order tensors which are efficient to work with and have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing tensor networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize non-linear kernel learning models. For the MNIST data set we obtain less than 1% test set classification error. We discuss an interpretation of the additional structure imparted by the tensor network to the learned model.
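The model described in the abstract can be illustrated with a minimal sketch: each pixel is lifted by a local feature map (the paper uses φ(x) = [cos(πx/2), sin(πx/2)]), and a matrix product state — with one tensor carrying an extra label index — is contracted against the feature vectors to produce one score per class. The weights below are random placeholders, and all function names and dimensions (`mps_scores`, `N`, `D`, `L`) are illustrative assumptions, not the authors' code:

```python
import numpy as np

def feature_map(x):
    # Local feature map from the paper: phi(x) = [cos(pi*x/2), sin(pi*x/2)]
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def mps_scores(pixels, tensors, label_pos):
    """Contract an MPS with one image's feature vectors, left to right.
    One tensor (at label_pos) carries an extra label index of size L,
    so the final contraction yields a vector of L class scores."""
    v = np.ones(1)          # left boundary vector
    label_block = None      # becomes an (L, D) block once the label site is passed
    for i, (x, A) in enumerate(zip(pixels, tensors)):
        phi = feature_map(x)
        if i == label_pos:
            # A has shape (D_left, 2, L, D_right); keep the label index open
            M = np.einsum('s,aslb->alb', phi, A)
            label_block = np.einsum('a,alb->lb', v, M)
        else:
            M = np.einsum('s,asb->ab', phi, A)  # (D_left, D_right) transfer matrix
            if label_block is None:
                v = v @ M
            else:
                label_block = label_block @ M
    return label_block[:, 0]  # shape (L,): one score per class

# Toy example: N pixels, bond dimension D, L classes, random MPS weights
rng = np.random.default_rng(0)
N, D, L = 6, 4, 3
pos = N // 2
tensors = []
for i in range(N):
    dl = 1 if i == 0 else D
    dr = 1 if i == N - 1 else D
    shape = (dl, 2, L, dr) if i == pos else (dl, 2, dr)
    tensors.append(rng.normal(size=shape) / np.sqrt(D))

pixels = rng.uniform(size=N)
scores = mps_scores(pixels, tensors, pos)  # one real-valued score per class
```

The left-to-right contraction keeps the cost linear in the number of pixels and polynomial in the bond dimension D, which is what makes the tensor-train parameterization tractable compared with an explicit weight tensor over all 2^N feature combinations.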

Cite

Text

Stoudenmire and Schwab. "Supervised Learning with Tensor Networks." Neural Information Processing Systems, 2016.

Markdown

[Stoudenmire and Schwab. "Supervised Learning with Tensor Networks." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/stoudenmire2016neurips-supervised/)

BibTeX

@inproceedings{stoudenmire2016neurips-supervised,
  title     = {{Supervised Learning with Tensor Networks}},
  author    = {Stoudenmire, Edwin and Schwab, David J},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {4799--4807},
  url       = {https://mlanthology.org/neurips/2016/stoudenmire2016neurips-supervised/}
}