On Learning Binary Weights for Majority Functions

Abstract

We investigate algorithms for learning binary weights from examples of majority functions of a set of literals. In particular, given a set of (randomly drawn) input-output pairs, with inputs being binary ±1 vectors, and the outputs likewise being ±1 classifications, we seek to find a vector of binary (±1) weights for a linear threshold element (or formal neuron) which provides a linearly separable hypothesis consistent with the set of examples. We present three algorithms (Directed Drift, Harmonic Update, and Majority Rule) for learning binary weights in this context, and examine their characteristics. In particular, we formally define a distribution-dependent notion of algorithmic capacity (which is related to the distribution-free notion of the VC dimension) and provide estimates of the capacity of the proposed algorithms.
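The setting described in the abstract can be illustrated with a small sketch. The paper itself does not appear on this page beyond the abstract, so the learning rule below is an assumed correlation-based estimator in the spirit of a "majority rule" (each weight takes the sign of the empirical correlation between its input bit and the label); it is not claimed to be the paper's exact algorithm. The target function, dimensions, and sample size are all hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 15, 201  # odd n and odd m avoid sign ties in this ±1 setting

# Hypothetical target: a majority function with hidden ±1 weights.
w_true = rng.choice([-1, 1], size=n)

# Randomly drawn ±1 input vectors and their ±1 classifications.
X = rng.choice([-1, 1], size=(m, n))
y = np.sign(X @ w_true)  # nonzero because n is odd

# Correlation-based weight estimate (an assumed "majority rule"-style form):
# each weight is the sign of the vote of the labels weighted by that input bit.
w_hat = np.sign(X.T @ y)  # nonzero because m is odd, so w_hat is ±1

# Fraction of the examples on which the learned threshold element agrees.
accuracy = np.mean(np.sign(X @ w_hat) == y)
```

With enough examples relative to the dimension, this estimator tends to recover the hidden ±1 weights, since each coordinate of `X.T @ y` concentrates around a multiple of the corresponding true weight.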

Cite

Text

Venkatesh. "On Learning Binary Weights for Majority Functions." Annual Conference on Computational Learning Theory, 1991. doi:10.1016/B978-1-55860-213-7.50027-4

Markdown

[Venkatesh. "On Learning Binary Weights for Majority Functions." Annual Conference on Computational Learning Theory, 1991.](https://mlanthology.org/colt/1991/venkatesh1991colt-learning/) doi:10.1016/B978-1-55860-213-7.50027-4

BibTeX

@inproceedings{venkatesh1991colt-learning,
  title     = {{On Learning Binary Weights for Majority Functions}},
  author    = {Venkatesh, Santosh S.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1991},
  pages     = {257-266},
  doi       = {10.1016/B978-1-55860-213-7.50027-4},
  url       = {https://mlanthology.org/colt/1991/venkatesh1991colt-learning/}
}