A Neural Network Approach to Constructive Induction
Abstract
Determining the appropriate size of an artificial neural network for a given inductive learning problem has been more of an art than a science. Human intervention, through educated guesswork, is often needed in an otherwise automatic learning process. We address this issue by formulating the problem as a systematic search in a space of functions that corresponds to a subclass of multilayer feedforward networks. Learning is thus a dynamic network construction process which involves adjusting the network topology as well as the weights. Adding new hidden units corresponds to extracting new features from the input attributes to reduce the residual classification errors. We argue that the learning process takes advantage of the transfer effects of prior learning in constructing large networks from smaller ones. The paper also includes empirical results on several inductive learning problems.
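The growth strategy sketched in the abstract — start small, train, and add hidden units (new features) only while residual error remains, reusing the weights already learned — can be illustrated with a minimal numpy sketch. This is not the paper's exact algorithm; the network shape, learning rate, stopping tolerance, and the XOR task are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): grow a one-hidden-layer
# network unit by unit, reusing the weights already learned by the smaller
# network (the "transfer" effect), and stop once training error is small enough.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, W1, b1, W2, b2, epochs=3000, lr=1.0):
    """Plain batch gradient descent on squared error."""
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(H @ W2 + b2)    # network output
        d_out = (out - y) * out * (1 - out)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0)
    return W1, b1, W2, b2

def grow_network(X, y, max_hidden=8, tol=0.05):
    """Add hidden units until mean squared training error drops below `tol`."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, 1)); b1 = np.zeros(1)
    W2 = rng.normal(scale=0.5, size=(1, 1));    b2 = np.zeros(1)
    for n_hidden in range(1, max_hidden + 1):
        W1, b1, W2, b2 = train(X, y, W1, b1, W2, b2)
        out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
        mse = float(np.mean((out - y) ** 2))
        if mse < tol:
            return n_hidden, mse
        # Grow: keep the trained weights, append one randomly initialised unit.
        W1 = np.hstack([W1, rng.normal(scale=0.5, size=(n_in, 1))])
        b1 = np.append(b1, 0.0)
        W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])
    return max_hidden, mse

# XOR is not linearly separable, so a single hidden unit cannot solve it and
# the construction process must add at least one more.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
n_hidden, mse = grow_network(X, y)
print(n_hidden, round(mse, 4))
```

On XOR the search necessarily grows past one hidden unit; warm-starting each larger network from the smaller one's weights is the transfer effect the abstract refers to.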
Cite

Text

Yeung. "A Neural Network Approach to Constructive Induction." International Conference on Machine Learning, 1991. doi:10.1016/B978-1-55860-200-7.50049-0

Markdown

[Yeung. "A Neural Network Approach to Constructive Induction." International Conference on Machine Learning, 1991.](https://mlanthology.org/icml/1991/yeung1991icml-neural/) doi:10.1016/B978-1-55860-200-7.50049-0

BibTeX
@inproceedings{yeung1991icml-neural,
title = {{A Neural Network Approach to Constructive Induction}},
author = {Yeung, Dit-Yan},
booktitle = {International Conference on Machine Learning},
year = {1991},
pages = {228-232},
doi = {10.1016/B978-1-55860-200-7.50049-0},
url = {https://mlanthology.org/icml/1991/yeung1991icml-neural/}
}