Techniques for Learning Binary Stochastic Feedforward Neural Networks
Abstract
Stochastic binary hidden units in a multi-layer perceptron (MLP) network offer at least three potential benefits over deterministic MLP networks. (1) They allow the network to learn one-to-many mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been shown to be an excellent regularizer, which potentially improves generalization performance. However, training stochastic networks is considerably more difficult. We study training using M samples of hidden activations per input. We show that the case M=1 leads to fundamentally different behavior, where the network tries to avoid stochasticity. We propose two new estimators for the training gradient and propose benchmark tests for comparing training algorithms. Our experiments confirm that training stochastic networks is difficult and show that the two proposed estimators perform favorably among the five known estimators.
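To make the abstract's setup concrete, below is a minimal NumPy sketch of a binary stochastic hidden layer that draws M Bernoulli samples of the hidden activations per input, together with one simple biased backward pass that propagates the gradient through the sigmoid mean (a straight-through-style estimator). The layer sizes, function names, and the particular estimator shown are illustrative assumptions, not the paper's exact estimators.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_stochastic_forward(x, W, b, M=10):
    """Draw M Bernoulli samples of the binary hidden units for one input.

    Returns (samples, p): samples has shape (M, n_hidden), and p is the
    vector of firing probabilities sigmoid(W x + b).
    """
    p = sigmoid(W @ x + b)
    samples = (rng.random((M, p.size)) < p).astype(float)
    return samples, p

def backward_through_sampling(grad_h, p):
    """One possible biased estimator (straight-through style, assumed here
    for illustration): backpropagate as if the unit had output its mean
    sigmoid(a), i.e. scale the incoming gradient by the sigmoid
    derivative p * (1 - p) to get the gradient w.r.t. the pre-activation.
    """
    return grad_h * p * (1.0 - p)

# Hypothetical dimensions for illustration.
x = rng.standard_normal(5)
W = 0.1 * rng.standard_normal((3, 5))
b = np.zeros(3)
samples, p = binary_stochastic_forward(x, W, b, M=4)  # 4 samples of 3 units
grad_a = backward_through_sampling(np.ones(3), p)     # grad w.r.t. pre-activation

With M > 1 the estimated gradient can be averaged over the samples; as the abstract notes, the case M=1 behaves qualitatively differently, with training driving the units toward deterministic behavior.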
Cite
Text
Raiko et al. "Techniques for Learning Binary Stochastic Feedforward Neural Networks." International Conference on Learning Representations, 2015.Markdown
[Raiko et al. "Techniques for Learning Binary Stochastic Feedforward Neural Networks." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/raiko2015iclr-techniques/)BibTeX
@inproceedings{raiko2015iclr-techniques,
title = {{Techniques for Learning Binary Stochastic Feedforward Neural Networks}},
author = {Raiko, Tapani and Berglund, Mathias and Alain, Guillaume and Dinh, Laurent},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/raiko2015iclr-techniques/}
}