Learning Activation Functions to Improve Deep Neural Networks
Abstract
Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, we are able to improve upon deep neural network architectures composed of static rectified linear units, achieving state-of-the-art performance on CIFAR-10 (7.51%), CIFAR-100 (30.83%), and a benchmark from high-energy physics involving Higgs boson decay modes.
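As a rough illustration of the idea, the sketch below implements a per-neuron piecewise linear activation as a rectifier plus a sum of learned hinge functions whose slopes and offsets are trained by gradient descent alongside the network weights. The module name, the PyTorch framing, the number of hinges, and the zero initialization are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class AdaptivePiecewiseLinear(nn.Module):
    """Per-neuron learned piecewise linear activation (illustrative sketch).

    Assumes a hinge-sum form: h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s),
    where the slopes a_s and offsets b_s are learned independently per neuron.
    """

    def __init__(self, num_neurons: int, num_hinges: int = 2):
        super().__init__()
        # One (slope, offset) pair per hinge per neuron, updated by the optimizer
        # together with the rest of the network's parameters.
        self.slopes = nn.Parameter(torch.zeros(num_hinges, num_neurons))
        self.offsets = nn.Parameter(torch.zeros(num_hinges, num_neurons))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_neurons); hinge parameters broadcast over the batch.
        out = torch.relu(x)
        for s in range(self.slopes.shape[0]):
            out = out + self.slopes[s] * torch.relu(-x + self.offsets[s])
        return out

# Usage: swap a fixed ReLU after a linear layer for the learned activation.
layer = nn.Linear(128, 64)
act = AdaptivePiecewiseLinear(num_neurons=64)
y = act(layer(torch.randn(32, 128)))
```

With the zero initialization above, the activation starts out as a plain rectified linear unit and only departs from it as the hinge parameters are learned; this is one reasonable default, not necessarily the initialization used in the paper.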
Cite
Text
Agostinelli et al. "Learning Activation Functions to Improve Deep Neural Networks." International Conference on Learning Representations, 2015.
Markdown
[Agostinelli et al. "Learning Activation Functions to Improve Deep Neural Networks." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/agostinelli2015iclr-learning/)
BibTeX
@inproceedings{agostinelli2015iclr-learning,
title = {{Learning Activation Functions to Improve Deep Neural Networks}},
author = {Agostinelli, Forest and Hoffman, Matthew D. and Sadowski, Peter J. and Baldi, Pierre},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/agostinelli2015iclr-learning/}
}