On Multi-Layered Connectionist Models: Adding Layers vs. Increasing Width
Abstract
In this paper, we explore the computational potential and limitations of multi-layered connectionist models [Minsky and Papert, 1968]. We find that the number of layers and the width are two crucial parameters of multi-layered connectionist models. If each layer has the same size n and we increment the number of layers by one, then the number of solvable problems grows by a factor of Θ(n³). On the other hand, suppose the number of layers equals 2. If we increment the width by one, then the number of solvable problems grows by a factor of Θ(nⁿ), where n is the input size. Hence, we can extend a 2-layered connectionist model either by adding layers or by increasing the width. Our conclusion is that increasing the width is better than adding layers.

2 Layered Connectionist Models

A layered connectionist machine is a special case of a connectionist model. It has t layers plus one input layer (layer zero). The input (bottom) layer contains n input neurons. The last (top) layer contains m output neurons. Each of the remaining layers contains w neurons. For i = 0, ..., t − 1, there are links connecting the neurons in layer i to the neurons in layer i + 1; no other links exist. The following is a formal definition of layered connectionist machines.
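The layered topology above can be sketched in code. This is a minimal illustration, not the paper's formal definition: the function names are hypothetical, and we assume hard-threshold (linear-threshold) neurons with randomly drawn weights. Layer zero holds the n inputs, each of the t − 1 intermediate layers holds w neurons, the top layer holds m output neurons, and links exist only between adjacent layers.

```python
import random

def make_layered_machine(n, w, t, m, seed=0):
    """Build a t-layer machine: n inputs (layer zero), w neurons in each
    of the t - 1 intermediate layers, m output neurons in the top layer.
    Returns one (weights, biases) pair per layer of adjacent-layer links."""
    rnd = random.Random(seed)
    sizes = [n] + [w] * (t - 1) + [m]   # neuron counts, layer 0 .. layer t
    layers = []
    for i in range(t):                  # links go only from layer i to layer i + 1
        W = [[rnd.gauss(0, 1) for _ in range(sizes[i])] for _ in range(sizes[i + 1])]
        b = [rnd.gauss(0, 1) for _ in range(sizes[i + 1])]
        layers.append((W, b))
    return layers

def evaluate(layers, x):
    """Forward pass with hard-threshold neurons; every activation is 0 or 1."""
    a = list(x)
    for W, b in layers:
        a = [1 if sum(wij * aj for wij, aj in zip(row, a)) + bi > 0 else 0
             for row, bi in zip(W, b)]
    return a
```

For example, `make_layered_machine(4, 3, 3, 2)` yields a machine with 4 inputs, two intermediate layers of 3 neurons, and 2 outputs; `evaluate` then maps any 4-bit input to a 2-bit output.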
Ho, Chung-Jen. "On Multi-Layered Connectionist Models: Adding Layers vs. Increasing Width." International Joint Conference on Artificial Intelligence, 1989, pp. 176-179. https://mlanthology.org/ijcai/1989/ho1989ijcai-multi/