Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard
Abstract
We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by a proof that the simplest architecture, containing only a single neuron that applies the standard (logistic) activation function to the weighted sum of n inputs, is hard to train. In particular, the problem of finding the weights of such a unit that minimize the relative quadratic training error to within 1 of its infimum, or the average error (over a training set) to within 13/(31n) of its infimum, proves to be NP-hard. Hence, the well-known back-propagation learning algorithm appears not to be efficient even for a single neuron, which has negative consequences in constructive learning.
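For concreteness, the objective referred to in the abstract can be sketched as follows. This is a minimal formalization based only on the abstract; the training-set notation (x_k, d_k), the number of patterns p, and the bias weight w_0 are assumptions here, since the precise definitions of the relative and average errors appear only in the full paper.

```latex
% A minimal sketch of the training objective from the abstract.
% Assumptions: training set T = {(x_k, d_k)}_{k=1}^{p} with x_k in R^n,
% desired outputs d_k, and weights w in R^{n+1} including a bias w_0.
\[
  \sigma(\xi) = \frac{1}{1 + e^{-\xi}}
  \qquad\text{(standard logistic activation)}
\]
\[
  E(w) = \sum_{k=1}^{p}
         \left( \sigma\!\left( w_0 + \sum_{i=1}^{n} w_i x_{ki} \right) - d_k \right)^{2}
\]
% The abstract's claim: approximating the infimum of the (relative) error
% to within 1, or the average error E(w)/p to within 13/(31 n) of its
% infimum, is NP-hard even for this single-neuron architecture.
```

Under this reading, the result says that no polynomial-time algorithm can guarantee such an approximation of the one-neuron objective unless P = NP, which is what makes the conclusion about back-propagation bite.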
Cite
Text
Síma. "Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard." International Conference on Algorithmic Learning Theory, 2001. doi:10.1007/3-540-45583-3_9Markdown
[Síma. "Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard." International Conference on Algorithmic Learning Theory, 2001.](https://mlanthology.org/alt/2001/sima2001alt-minimizing/) doi:10.1007/3-540-45583-3_9BibTeX
@inproceedings{sima2001alt-minimizing,
title = {{Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard}},
author = {Šíma, Jiří},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2001},
pages = {92--105},
doi = {10.1007/3-540-45583-3_9},
url = {https://mlanthology.org/alt/2001/sima2001alt-minimizing/}
}