Rectified Linear Units Improve Restricted Boltzmann Machines

Abstract

Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these "Stepped Sigmoid Units" are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors.
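The construction described in the abstract can be sketched numerically: summing sigmoid units that share weights but have biases shifted by -0.5, -1.5, -2.5, ... gives an expected activation close to the softplus function log(1 + e^x), and the paper approximates sampling from this stack with a noisy rectified linear unit, max(0, x + noise) where the noise variance is sigmoid(x). The function names below are illustrative, not from the paper; this is a minimal sketch of the idea, not the authors' implementation.

```python
import numpy as np

def stepped_sigmoid_expected(x, n_copies=100):
    """Expected total activation of a stack of sigmoid units with
    shared input x and biases -0.5, -1.5, ..., truncated at n_copies."""
    x = np.asarray(x, dtype=float)
    i = np.arange(n_copies)
    return (1.0 / (1.0 + np.exp(-(x[..., None] - i - 0.5)))).sum(axis=-1)

def softplus(x):
    """Smooth approximation to the stepped-sigmoid sum: log(1 + e^x)."""
    return np.log1p(np.exp(x))

def noisy_relu(x, rng):
    """Noisy rectified linear unit: rectify the input plus Gaussian
    noise whose variance is sigmoid(x) (an approximate sampler for
    the stepped sigmoid stack)."""
    x = np.asarray(x, dtype=float)
    sigma2 = 1.0 / (1.0 + np.exp(-x))
    return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(sigma2), size=x.shape))
```

For moderate inputs the truncated sigmoid sum and the softplus agree closely, which is why the deterministic rectified linear unit max(0, x) serves as a cheap stand-in for the whole stack at test time.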

Cite

Text

Nair and Hinton. "Rectified Linear Units Improve Restricted Boltzmann Machines." International Conference on Machine Learning, 2010.

Markdown

[Nair and Hinton. "Rectified Linear Units Improve Restricted Boltzmann Machines." International Conference on Machine Learning, 2010.](https://mlanthology.org/icml/2010/nair2010icml-rectified/)

BibTeX

@inproceedings{nair2010icml-rectified,
  title     = {{Rectified Linear Units Improve Restricted Boltzmann Machines}},
  author    = {Nair, Vinod and Hinton, Geoffrey E.},
  booktitle = {International Conference on Machine Learning},
  year      = {2010},
  pages     = {807--814},
  url       = {https://mlanthology.org/icml/2010/nair2010icml-rectified/}
}