Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks
Abstract
The primary goal of inductive learning is to generalize well -- that is, induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima....
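The core idea — using competitive learning to pull initial weight vectors away from the origin and toward the structure of the data — can be sketched with a simple winner-take-all pass. This is a hypothetical illustration of the general technique, not the paper's exact procedure; the function name and parameters are our own.

```python
import numpy as np

def competitive_init(X, n_hidden, lr=0.1, epochs=5, rng=None):
    """Winner-take-all competitive learning: repeatedly move the closest
    hidden unit's weight vector toward each input, so the initial weights
    end up near clusters in the data rather than near the origin."""
    rng = np.random.default_rng(rng)
    # Start from small random weights near the origin (the standard init),
    # then let competition spread them out toward the input distribution.
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(X):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])  # pull the winner toward the input
    return W

# Two well-separated input clusters, centered at (-3, -3) and (3, 3)
data_rng = np.random.default_rng(0)
X = np.vstack([data_rng.normal(-3, 0.5, (50, 2)),
               data_rng.normal(3, 0.5, (50, 2))])
W = competitive_init(X, n_hidden=4, rng=1)
print(np.linalg.norm(W, axis=1))  # some units now sit far from the origin
```

Because each network's initialization depends on the (random) presentation order, different ensemble members start in different regions of weight space, which is what makes their gradient-descent searches more likely to reach different local minima and produce less correlated errors.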
Maclin and Shavlik. "Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks." International Joint Conference on Artificial Intelligence, 1995.
@inproceedings{maclin1995ijcai-combining,
title = {{Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks}},
author = {Maclin, Richard and Shavlik, Jude W.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1995},
pages = {524--531},
url = {https://mlanthology.org/ijcai/1995/maclin1995ijcai-combining/}
}