Recovering a Feed-Forward Net from Its Output
Abstract
We study feed-forward nets with arbitrarily many layers, using the standard sigmoid, tanh x. Aside from technicalities, our theorems are: 1. Complete knowledge of the output of a neural net for arbitrary inputs uniquely specifies the architecture, weights and thresholds; and 2. There are only finitely many critical points on the error surface for a generic training problem.
Cite
Fefferman and Markel. "Recovering a Feed-Forward Net from Its Output." Neural Information Processing Systems, 1993.

BibTeX
@inproceedings{fefferman1993neurips-recovering,
  title     = {{Recovering a Feed-Forward Net from Its Output}},
  author    = {Fefferman, Charles and Markel, Scott},
  booktitle = {Neural Information Processing Systems},
  year      = {1993},
  pages     = {335--342},
  url       = {https://mlanthology.org/neurips/1993/fefferman1993neurips-recovering/}
}