Complexity of Network Training for Classes of Neural Networks

Abstract

It is known that the problem of training certain specific, very simple neural networks is NP-complete. While such results suggest that training is equally hard for larger or differently configured networks, this conclusion is by no means self-evident. The main result of this paper is that it is NP-complete to train any specific architecture or class of architectures. On the other hand, it is also shown that a simple 4-node network (with two hidden and two output units) can be trained in polynomial time if the target network function is assumed to be surjective. This remains true for networks of any size, provided that the number of output units is at least equal to the number of units in the first computing layer. Thus, with a mild restriction placed on the class of target functions, training certain large networks may be easier than training smaller ones.

Cite

Text

Pinter. "Complexity of Network Training for Classes of Neural Networks." International Conference on Algorithmic Learning Theory, 1995. doi:10.1007/3-540-60454-5_40

Markdown

[Pinter. "Complexity of Network Training for Classes of Neural Networks." International Conference on Algorithmic Learning Theory, 1995.](https://mlanthology.org/alt/1995/pinter1995alt-complexity/) doi:10.1007/3-540-60454-5_40

BibTeX

@inproceedings{pinter1995alt-complexity,
  title     = {{Complexity of Network Training for Classes of Neural Networks}},
  author    = {Pinter, Charles C.},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {1995},
  pages     = {215--227},
  doi       = {10.1007/3-540-60454-5_40},
  url       = {https://mlanthology.org/alt/1995/pinter1995alt-complexity/}
}