Learning Stochastic Perceptrons Under K-Blocking Distributions
Abstract
We present a statistical method that PAC learns the class of stochastic perceptrons with arbitrary monotonic activation function and weights wᵢ ∈ {−1, 0, +1} when the probability distribution that generates the input examples is a member of a family that we call k-blocking distributions. Such distributions represent an important step beyond the case where each input variable is statistically independent, since the 2k-blocking family contains all the Markov distributions of order k. By stochastic perceptron we mean a perceptron which, upon presentation of input vector x, outputs 1 with probability f(∑ᵢ wᵢxᵢ − θ). Because the same algorithm works for any monotonic (nondecreasing or nonincreasing) activation function f on a Boolean domain, it handles the well-studied cases of sigmoids and the "usual" radial basis functions.
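To make the model concrete (this illustrates the stochastic perceptron itself, not the paper's learning algorithm), here is a minimal Python sketch that samples an output with probability f(∑ᵢ wᵢxᵢ − θ), assuming a sigmoid activation and hypothetical example values for the weights, threshold, and input:

```python
import math
import random

def stochastic_perceptron(x, w, theta, f):
    """Output 1 with probability f(sum_i w_i * x_i - theta), else 0."""
    p = f(sum(wi * xi for wi, xi in zip(w, x)) - theta)
    return 1 if random.random() < p else 0

# A sigmoid is one of the monotonic activations the paper covers.
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

w = [1, 0, -1, 1]   # weights restricted to {-1, 0, +1}, as in the paper
theta = 0.5         # threshold (hypothetical value)
x = [1, 0, 1, 1]    # Boolean input vector (hypothetical value)

print(stochastic_perceptron(x, w, theta, sigmoid))  # prints 0 or 1 at random
```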
Cite
Text
Marchand and Hadjifaradji. "Learning Stochastic Perceptrons Under K-Blocking Distributions." Neural Information Processing Systems, 1994.
Markdown
[Marchand and Hadjifaradji. "Learning Stochastic Perceptrons Under K-Blocking Distributions." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/marchand1994neurips-learning/)
BibTeX
@inproceedings{marchand1994neurips-learning,
title = {{Learning Stochastic Perceptrons Under K-Blocking Distributions}},
author = {Marchand, Mario and Hadjifaradji, Saeed},
booktitle = {Neural Information Processing Systems},
year = {1994},
pages = {279-286},
url = {https://mlanthology.org/neurips/1994/marchand1994neurips-learning/}
}