Complexity Issues in Learning by Neural Nets
Abstract
We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. We introduce a neural net learning model and classify several neural net design and optimization problems within the polynomial-time hierarchy. We also investigate how much easier the problems become if the class of concepts to be learned is known a priori. We show that the training problem for 2-cascade neural nets (which have only one hidden unit) is NP-complete, which implies that finding an optimum net to load a set of examples is also NP-complete. We conjecture that training a k-cascade neural net, which is a classical threshold network training problem, is also NP-complete, for each k ≥ 2.
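To make the architecture concrete, here is a minimal sketch (all names and weights are illustrative, not from the paper) of a 2-cascade threshold net: a single hidden linear threshold unit whose output is fed, together with the raw inputs, into the output threshold unit. The example loads XOR, a function no single threshold unit can compute.

```python
def threshold(weights, bias, x):
    """Linear threshold unit: fires (1) iff w . x + bias >= 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def two_cascade(w_hidden, b_hidden, w_out, b_out, x):
    """2-cascade net: the output unit sees the inputs plus the hidden unit's output."""
    h = threshold(w_hidden, b_hidden, x)
    return threshold(w_out, b_out, list(x) + [h])

# Illustrative weights (an assumption for this sketch): the hidden unit
# computes AND(x1, x2); the output unit computes x1 + x2 - 2*AND(x1, x2) >= 1,
# i.e. XOR.
w_h, b_h = [1, 1], -2
w_o, b_o = [1, 1, -2], -1
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, two_cascade(w_h, b_h, w_o, b_o, x))
```

The training problem the abstract refers to is finding such weights and thresholds that load a given set of labeled examples; the result above is that this is NP-complete even for this two-unit architecture.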
Cite
Text
Lin and Vitter. "Complexity Issues in Learning by Neural Nets." Annual Conference on Computational Learning Theory, 1989. doi:10.1016/b978-0-08-094829-4.50011-8

Markdown

[Lin and Vitter. "Complexity Issues in Learning by Neural Nets." Annual Conference on Computational Learning Theory, 1989.](https://mlanthology.org/colt/1989/lin1989colt-complexity/) doi:10.1016/b978-0-08-094829-4.50011-8

BibTeX
@inproceedings{lin1989colt-complexity,
title = {{Complexity Issues in Learning by Neural Nets}},
author = {Lin, Jyh-Han and Vitter, Jeffrey Scott},
booktitle = {Annual Conference on Computational Learning Theory},
year = {1989},
pages = {118--133},
doi = {10.1016/b978-0-08-094829-4.50011-8},
url = {https://mlanthology.org/colt/1989/lin1989colt-complexity/}
}