Breaking the Activation Function Bottleneck Through Adaptive Parameterization

Abstract

Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large. In this paper, we consider methods for making the feed-forward layer more flexible while preserving its basic structure. We develop simple drop-in replacements that learn to adapt their parameterization conditional on the input, thereby increasing statistical efficiency significantly. We present an adaptive LSTM that advances the state of the art for the Penn Treebank and WikiText-2 word-modeling tasks while using fewer parameters and converging in half as many iterations.
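To make the idea of "adapting the parameterization conditional on the input" concrete, here is a minimal, hypothetical sketch of an input-conditioned feed-forward layer. It is not the paper's exact formulation: the layer names, adapter size, and the sigmoid-gated affine modulation are illustrative assumptions standing in for the adaptive parameterization described in the abstract.

```python
import torch
import torch.nn as nn

class AdaptiveFeedForward(nn.Module):
    """Illustrative sketch: a feed-forward layer whose fixed element-wise
    activation is replaced by an input-conditioned modulation, so the
    effective parameterization adapts to each input (hypothetical design,
    not the authors' exact method)."""

    def __init__(self, in_features, out_features, adapter_dim=32):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Small adapter network that produces per-unit gains and shifts
        # conditioned on the input (sizes chosen for illustration).
        self.adapter = nn.Sequential(
            nn.Linear(in_features, adapter_dim),
            nn.Tanh(),
            nn.Linear(adapter_dim, 2 * out_features),
        )

    def forward(self, x):
        h = self.linear(x)
        gain, shift = self.adapter(x).chunk(2, dim=-1)
        # Input-dependent affine modulation in place of a fixed activation.
        return torch.sigmoid(gain) * h + shift


# Usage: a drop-in replacement for a linear layer followed by a fixed activation.
layer = AdaptiveFeedForward(128, 256)
y = layer(torch.randn(8, 128))
```

The same conditioning idea can be applied inside a recurrent cell, which is the spirit of the adaptive LSTM the paper evaluates on the language-modeling benchmarks.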

Cite

Text

Flennerhag et al. "Breaking the Activation Function Bottleneck Through Adaptive Parameterization." Neural Information Processing Systems, 2018.

Markdown

[Flennerhag et al. "Breaking the Activation Function Bottleneck Through Adaptive Parameterization." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/flennerhag2018neurips-breaking/)

BibTeX

@inproceedings{flennerhag2018neurips-breaking,
  title     = {{Breaking the Activation Function Bottleneck Through Adaptive Parameterization}},
  author    = {Flennerhag, Sebastian and Yin, Hujun and Keane, John and Elliot, Mark},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {7739--7750},
  url       = {https://mlanthology.org/neurips/2018/flennerhag2018neurips-breaking/}
}