Hardness Results for General Two-Layer Neural Networks
Abstract
We deal with the problem of learning a general class of 2-layer neural networks in polynomial time. The networks under consideration consist of k linear threshold units on the hidden layer and an arbitrary binary output unit. We show NP-completeness of the consistency problem for classes that use an arbitrary set of binary output units containing a function which depends on all input dimensions; here k is allowed to be polynomial in the input size. These classes encompass a variety of multilayer neural networks, such as the class of multilayer feedforward threshold units. We obtain an analogous result for classes of 2-layer neural networks with any fixed nontrivial output unit. Furthermore, we present a hardness result for approximation: we prove that it is NP-hard to find a 2-layer neural network of constant size with output unit PARITY that approximately (up to a constant factor) maximizes the fraction of correctly classified examples in the given training set. We further develop a general too...
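To make the model class concrete, here is a minimal sketch (illustrative only, not taken from the paper): a 2-layer network with k linear threshold units on the hidden layer feeding a binary output unit such as PARITY, together with the training-set accuracy that the approximation result refers to. The helper names (`ltu`, `two_layer_net`, `training_accuracy`) and the example weights are hypothetical.

```python
import numpy as np

def ltu(w, t, x):
    """Linear threshold unit: fires (outputs 1) iff w . x >= t."""
    return int(np.dot(w, x) >= t)

def parity(bits):
    """PARITY output unit: 1 iff an odd number of hidden units fire."""
    return sum(bits) % 2

def two_layer_net(W, T, output_unit, x):
    """2-layer network: k hidden LTUs, then a binary output unit."""
    hidden = [ltu(w, t, x) for w, t in zip(W, T)]
    return output_unit(hidden)

def training_accuracy(W, T, output_unit, examples):
    """Fraction of correctly classified training examples -- the
    quantity the approximation hardness result is about."""
    hits = sum(two_layer_net(W, T, output_unit, x) == y for x, y in examples)
    return hits / len(examples)

# Illustrative instance: k = 2 hidden units on 2-dimensional inputs.
W = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
T = [0.5, 0.5]
examples = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 1)]
print(training_accuracy(W, T, parity, examples))  # -> 1.0
```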
Cite
Text
Kuhlmann. "Hardness Results for General Two-Layer Neural Networks." Annual Conference on Computational Learning Theory, 2000.Markdown
[Kuhlmann. "Hardness Results for General Two-Layer Neural Networks." Annual Conference on Computational Learning Theory, 2000.](https://mlanthology.org/colt/2000/kuhlmann2000colt-hardness/)

BibTeX
@inproceedings{kuhlmann2000colt-hardness,
title = {{Hardness Results for General Two-Layer Neural Networks}},
author = {Kuhlmann, Christian},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2000},
  pages = {275--285},
url = {https://mlanthology.org/colt/2000/kuhlmann2000colt-hardness/}
}