On the Power of Neural Networks for Solving Hard Problems
Abstract
This paper deals with a neural network model in which each neuron performs a threshold logic function. An important property of the model is that it always converges to a stable state when operating in a serial mode [2,5]. This property is the basis of the model's potential applications, such as associative memory devices and combinatorial optimization [3,6]. One motivation for using the model to solve hard combinatorial problems is that it can be implemented by optical devices and thus operate at a higher speed than conventional electronics. The main theme of this work is to investigate the power of the model for solving NP-hard problems [4,8], and to understand the relation between the speed of operation and the size of a neural network. In particular, it is shown that for any NP-hard problem, the existence of a polynomial-size network that solves it implies that NP = co-NP. Also, for the Traveling Salesman Problem (TSP), even a polynomial-size network that finds an ε-approximate solution does not exist unless P = NP.
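The convergence property the abstract refers to can be illustrated with a small sketch (not the authors' code, and the function names are invented for illustration): with a symmetric weight matrix, zero diagonal, and one threshold neuron updated at a time (serial mode), the network's energy never increases, so it must settle into a stable state.

```python
import numpy as np

def energy(W, t, s):
    # Standard Hopfield-style energy: E = -1/2 * s^T W s + t . s
    return -0.5 * s @ W @ s + t @ s

def serial_converge(W, t, s, max_sweeps=100):
    """Serially update threshold neurons until no neuron wants to flip.

    W: symmetric weights with zero diagonal; t: thresholds;
    s: initial state vector of +/-1 entries. Returns a stable state.
    """
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):  # serial mode: update one neuron at a time
            new = 1 if W[i] @ s >= t[i] else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:  # stable state reached: every neuron is consistent
            return s
    return s

# Tiny example: two mutually excitatory neurons settle into agreement.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
t = np.zeros(2)
s0 = np.array([1, -1])
stable = serial_converge(W, t, s0.copy())
```

Running the sketch, the disagreeing initial state flips to a consistent one, and the energy of the final state is no larger than that of the start, which is the monotonicity argument behind the convergence result.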
Cite

Text:
Bruck and Goodman. "On the Power of Neural Networks for Solving Hard Problems." Neural Information Processing Systems, 1987.

Markdown:
[Bruck and Goodman. "On the Power of Neural Networks for Solving Hard Problems." Neural Information Processing Systems, 1987.](https://mlanthology.org/neurips/1987/bruck1987neurips-power/)

BibTeX:
@inproceedings{bruck1987neurips-power,
title = {{On the Power of Neural Networks for Solving Hard Problems}},
author = {Bruck, Jehoshua and Goodman, Joseph W.},
booktitle = {Neural Information Processing Systems},
year = {1987},
pages = {137-143},
url = {https://mlanthology.org/neurips/1987/bruck1987neurips-power/}
}