Learning a Continuous Hidden Variable Model for Binary Data
Abstract
A directed generative model for binary data using a small number of hidden continuous units is investigated. A clipping nonlinearity distinguishes the model from conventional principal components analysis. The relationships between the correlations of the underlying continuous Gaussian variables and the binary output variables are utilized to learn the appropriate weights of the network. The advantages of this approach are illustrated on a translationally invariant binary distribution and on handwritten digit images.
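The abstract describes a generative chain in which a few continuous Gaussian hidden variables are mixed linearly and then clipped to produce binary outputs, with the binary correlations tied back to the underlying Gaussian correlations. The sketch below is a minimal illustration of that idea, not the authors' exact formulation: the variable names, the sign() clipping, the network sizes, and the use of the (2/pi)*arcsin identity for zero-mean Gaussian variables are all assumptions made for the example.

```python
# Minimal sketch (assumed setup): Gaussian hidden causes -> linear mixing
# -> clipping to +/-1 binary outputs, and the arcsine relation between the
# Gaussian correlations and the resulting binary correlations.
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 3      # small number of continuous hidden units (assumed)
n_visible = 16    # number of binary output units (assumed)
n_samples = 50000

# Mixing weights from hidden Gaussian variables to visible units (assumed random).
W = rng.normal(size=(n_visible, n_hidden)) / np.sqrt(n_hidden)

# Generative pass.
s = rng.normal(size=(n_samples, n_hidden))   # hidden Gaussian variables
x = s @ W.T                                  # underlying continuous field
b = np.sign(x)                               # clipped binary outputs (+/-1)

# For zero-mean jointly Gaussian variables with correlation rho_ij, the binary
# correlations satisfy E[b_i b_j] = (2/pi) * arcsin(rho_ij); relations of this
# kind let one connect binary statistics to the continuous Gaussian model.
C = W @ W.T
d = np.sqrt(np.diag(C))
rho = C / np.outer(d, d)                     # correlations of the Gaussian field
predicted_binary_corr = (2.0 / np.pi) * np.arcsin(rho)

empirical_binary_corr = (b.T @ b) / n_samples
print(np.max(np.abs(predicted_binary_corr - empirical_binary_corr)))  # small
```

Inverting the arcsine relation on empirical binary correlations gives an estimate of the Gaussian correlation matrix, from which mixing weights can in principle be recovered; this is only meant to illustrate the kind of correlation-matching the abstract refers to.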
Cite
Text
Lee and Sompolinsky. "Learning a Continuous Hidden Variable Model for Binary Data." Neural Information Processing Systems, 1998.
Markdown
[Lee and Sompolinsky. "Learning a Continuous Hidden Variable Model for Binary Data." Neural Information Processing Systems, 1998.](https://mlanthology.org/neurips/1998/lee1998neurips-learning/)
BibTeX
@inproceedings{lee1998neurips-learning,
title = {{Learning a Continuous Hidden Variable Model for Binary Data}},
author = {Lee, Daniel D. and Sompolinsky, Haim},
booktitle = {Neural Information Processing Systems},
year = {1998},
pages = {515-521},
url = {https://mlanthology.org/neurips/1998/lee1998neurips-learning/}
}