Learning by a Population of Perceptrons
Abstract
We study learning from examples by a population of neural networks. A group of single-layer perceptrons with discrete weights learns from a two-layer neural network. Each member is trained independently, either on the same example set or on independent example sets, and the group answers new problems by majority vote. We calculate the generalization performance of this majority-vote decision. The generalization error decreases to a minimum at a certain number of examples and then increases again.
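The setup described above can be illustrated with a minimal simulation sketch: a committee of single-layer perceptrons with binary (±1) weights, each trained on its own example set drawn from a fixed two-layer teacher network, answering test questions by majority vote. The teacher architecture, the clipped-Hebb training rule, and all sizes below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 51    # input dimension (odd, so +/-1 dot products never vanish)
K = 11    # committee size (odd, so the majority vote is always decisive)
P = 200   # training examples per committee member

# Hypothetical two-layer teacher: a committee machine with 3 hidden units
# and fixed random +/-1 weights.
W_teacher = rng.choice([-1, 1], size=(3, N))

def teacher(x):
    return np.sign(np.sign(x @ W_teacher.T).sum(axis=-1))

# Train K perceptrons independently, each on its own example set.
# Clipped-Hebb learning stands in for discrete-weight training here;
# the paper's actual learning rule may differ.
students = []
for k in range(K):
    Xk = rng.choice([-1.0, 1.0], size=(P, N))
    yk = teacher(Xk)
    J = np.sign((Xk * yk[:, None]).sum(axis=0))  # discrete +/-1 weights
    J[J == 0] = 1
    students.append(J)

# Majority vote on fresh test examples.
X_test = rng.choice([-1.0, 1.0], size=(1000, N))
y_test = teacher(X_test)
votes = np.sign(X_test @ np.array(students).T)   # each member's answer
committee = np.sign(votes.sum(axis=1))           # group decision
err = np.mean(committee != y_test)               # generalization error
```

Varying `P` in such a simulation is one way to probe the non-monotonic behavior the abstract describes, where the error reaches a minimum at a certain number of examples before rising again.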
Cite
Kang and Oh. "Learning by a Population of Perceptrons." Annual Conference on Computational Learning Theory, 1995. doi:10.1145/225298.225334
@inproceedings{kang1995colt-learning,
  title     = {{Learning by a Population of Perceptrons}},
  author    = {Kang, Kukjin and Oh, Jong-Hoon},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1995},
  pages     = {297--300},
  doi       = {10.1145/225298.225334},
  url       = {https://mlanthology.org/colt/1995/kang1995colt-learning/}
}