Kernelized Synaptic Weight Matrices

Abstract

In this paper we introduce a novel neural network architecture in which weight matrices are re-parametrized in terms of low-dimensional vectors interacting through kernel functions. A layer of our network can be interpreted as introducing a (potentially infinitely wide) linear layer between its input and output. We describe the theory underpinning this model and validate it with concrete examples, exploring how it can be used to impose structure on neural networks in applications ranging from data visualization to recommender systems. We achieve state-of-the-art performance on a collaborative filtering task (MovieLens).
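To make the core re-parametrization concrete, the following is a minimal PyTorch sketch of a layer whose weight matrix is generated from low-dimensional vectors through a kernel, i.e. W[i, j] = k(u_i, v_j). The class name KernelizedLinear, the choice of a Gaussian (RBF) kernel, and the embedding dimensionality are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class KernelizedLinear(nn.Module):
    # Instead of learning W directly, learn low-dimensional embeddings
    # u_i (one per output unit) and v_j (one per input unit) and set
    # W[i, j] = k(u_i, v_j). An RBF kernel is used here as one example;
    # other kernel choices are possible.
    def __init__(self, in_features, out_features, embed_dim=2, gamma=1.0):
        super().__init__()
        self.u = nn.Parameter(torch.randn(out_features, embed_dim))  # output-side vectors
        self.v = nn.Parameter(torch.randn(in_features, embed_dim))   # input-side vectors
        self.gamma = gamma

    def forward(self, x):
        # Pairwise squared distances between output- and input-side embeddings.
        d2 = torch.cdist(self.u, self.v).pow(2)   # shape: (out_features, in_features)
        w = torch.exp(-self.gamma * d2)           # kernelized weight matrix
        return x @ w.t()

# Usage: a drop-in replacement for a bias-free nn.Linear.
layer = KernelizedLinear(in_features=784, out_features=128, embed_dim=2)
y = layer(torch.randn(32, 784))

Because the weights are functions of the embeddings u and v, gradients flow through the kernel into the low-dimensional vectors, which is what lets the kernel impose structure on the learned weight matrix.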

Cite

Text

Muller et al. "Kernelized Synaptic Weight Matrices." International Conference on Machine Learning, 2018.

Markdown

[Muller et al. "Kernelized Synaptic Weight Matrices." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/muller2018icml-kernelized/)

BibTeX

@inproceedings{muller2018icml-kernelized,
  title     = {{Kernelized Synaptic Weight Matrices}},
  author    = {Muller, Lorenz and Martel, Julien and Indiveri, Giacomo},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {3654--3663},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/muller2018icml-kernelized/}
}