Two SVDs Produce More Focal Deep Learning Representations
Abstract
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012).
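The abstract names the core technique only briefly: two consecutive SVDs applied to an input representation. A minimal sketch of that idea is below; the matrix shapes, the rank `k`, and the row length-normalization between the two SVDs are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Toy (item x feature) count matrix standing in for the input
# representation; the shape and data are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 50)).astype(float)

k = 10  # target dimensionality (assumed)

# First SVD: project rows into a rank-k space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = U[:, :k] * s[:k]

# Length-normalize rows before the second SVD (assumed intermediate step).
Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)

# Second SVD on the normalized reduced matrix yields the final
# k-dimensional representations.
U2, s2, V2t = np.linalg.svd(Y, full_matrices=False)
Z = U2[:, :k] * s2[:k]

print(Z.shape)  # (100, 10)
```

Each row of `Z` is a k-dimensional representation of the corresponding input row; the second SVD re-expresses the normalized reduced vectors in a new orthogonal basis.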
Cite
Text
Schütze and Scheible. "Two SVDs Produce More Focal Deep Learning Representations." International Conference on Learning Representations, 2013.

Markdown
[Schütze and Scheible. "Two SVDs Produce More Focal Deep Learning Representations." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/schutze2013iclr-two/)

BibTeX
@inproceedings{schutze2013iclr-two,
  title = {{Two SVDs Produce More Focal Deep Learning Representations}},
  author = {Schütze, Hinrich and Scheible, Christian},
  booktitle = {International Conference on Learning Representations},
  year = {2013},
  url = {https://mlanthology.org/iclr/2013/schutze2013iclr-two/}
}