Tensor Switching Networks
Abstract
We present a novel neural network algorithm, the Tensor Switching (TS) network, which generalizes the Rectified Linear Unit (ReLU) nonlinearity to tensor-valued hidden units. The TS network copies its entire input vector to different locations in an expanded representation, with the location determined by its hidden unit activity. In this way, even a simple linear readout from the TS representation can implement a highly expressive deep-network-like function. The TS network hence avoids the vanishing gradient problem by construction, at the cost of larger representation size. We develop several methods to train the TS network, including equivalent kernels for infinitely wide and deep TS networks, a one-pass linear learning algorithm, and two backpropagation-inspired representation learning algorithms. Our experimental results demonstrate that the TS network is indeed more expressive and consistently learns faster than standard ReLU networks.
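To make the switching mechanism concrete, here is a minimal sketch (not code from the paper) of a single TS layer in NumPy, assuming ReLU-style binary gating; the name ts_layer and all variable names are illustrative. Each hidden unit only decides where to route the input, and a copy of the full input vector lands in that unit's slot of the expanded representation, so a plain linear readout can recover standard ReLU activations as a special case.

import numpy as np

def ts_layer(x, W):
    # Binary switching pattern: unit h is "on" iff W[h] @ x > 0,
    # which is exactly the gating decision of a ReLU unit.
    gates = (W @ x > 0).astype(x.dtype)
    # Expanded (tensor-valued) representation: row h holds a copy of
    # the entire input x if unit h is on, and zeros otherwise.
    return np.outer(gates, x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # input vector
W = rng.standard_normal((4, 8))     # hidden weights for 4 switching units
Z = ts_layer(x, W)                  # shape (4, 8): hidden units x input dims

# Choosing readout weights equal to W recovers ordinary ReLU
# activations, but any other linear readout of the routed copies
# is equally cheap to compute.
relu_equiv = np.einsum('hd,hd->h', W, Z)
assert np.allclose(relu_equiv, np.maximum(0, W @ x))

The final assertion illustrates the abstract's claim that a simple linear readout from the TS representation can implement a ReLU-network-like function, at the cost of a representation larger by a factor of the input dimension.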
Cite
Text
Tsai et al. "Tensor Switching Networks." Neural Information Processing Systems, 2016.
Markdown
[Tsai et al. "Tensor Switching Networks." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/tsai2016neurips-tensor/)
BibTeX
@inproceedings{tsai2016neurips-tensor,
title = {{Tensor Switching Networks}},
author = {Tsai, Chuan-Yung and Saxe, Andrew M and Cox, David},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {2038--2046},
url = {https://mlanthology.org/neurips/2016/tsai2016neurips-tensor/}
}