On Approximate Learning by Multi-Layered Feedforward Circuits
Abstract
We consider the problem of efficient approximate learning by multi-layered feedforward circuits with respect to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3], [5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n³, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and ε-separation of the output [19] are considered, or if the semilinear activation function replaces the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for a multi-layered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
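To make the two objectives concrete, the following is a minimal Python sketch, assuming the standard definitions of the success and failure ratios from the loading-problem literature; the function names and the explicit optimum parameter are illustrative, not taken from the paper.

import numpy as np

def heaviside(z):
    # Threshold activation: 1 where z >= 0, else 0.
    return (np.asarray(z) >= 0).astype(int)

def threshold_circuit(x, W, b, v, c):
    # Single hidden layer threshold circuit with n hidden nodes:
    # hidden layer H(Wx + b), followed by a threshold output node.
    hidden = heaviside(W @ x + b)
    return int(heaviside(v @ hidden + c))

def success_ratio(X, y, W, b, v, c):
    # First objective: fraction of training points classified correctly.
    preds = np.array([threshold_circuit(x, W, b, v, c) for x in X])
    return float(np.mean(preds == y))

def failure_ratio(errors_of_hypothesis, minimum_possible_errors):
    # Second objective (following [2]): misclassifications of the trained
    # circuit divided by the minimum number of misclassifications achievable
    # by any circuit of the same architecture. The denominator is taken as
    # given here; computing it exactly is itself intractable in general.
    return errors_of_hypothesis / minimum_possible_errors

Maximizing the first quantity and minimizing the second are the two optimization problems whose approximation hardness the paper establishes.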
Cite
DasGupta, Bhaskar and Hammer, Barbara. "On Approximate Learning by Multi-Layered Feedforward Circuits." International Conference on Algorithmic Learning Theory, 2000, pp. 264-278. doi:10.1007/3-540-40992-0_20
@inproceedings{dasgupta2000alt-approximate,
title = {{On Approximate Learning by Multi-Layered Feedforward Circuits}},
author = {DasGupta, Bhaskar and Hammer, Barbara},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2000},
pages = {264--278},
doi = {10.1007/3-540-40992-0_20},
url = {https://mlanthology.org/alt/2000/dasgupta2000alt-approximate/}
}