On the Stability of Deep Networks
Abstract
In this work we study the properties of deep neural networks (DNNs) with random weights. We formally prove that these networks perform a distance-preserving embedding of the data, and based on this result we draw conclusions about the required size of the training data and the networks' structure. A longer version of this paper, with additional results and details, can be found in (Giryes et al., 2015); in particular, it formally proves that DNNs with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
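As a rough numerical illustration of the distance-preservation phenomenon the abstract describes, the sketch below applies a single random Gaussian linear layer and compares pairwise distances before and after the map (a Johnson–Lindenstrauss-style experiment). This is only an assumed, simplified setting: the paper's actual theorem concerns deep networks with nonlinearities, and the dimensions, scaling, and tolerance here are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: input dim d, embedding dim m (assumptions, not from the paper)
d, m = 50, 4000

# Random Gaussian layer, scaled by 1/sqrt(m) so distances are preserved in expectation
W = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, d))

# Two arbitrary data points
x = rng.normal(size=d)
y = rng.normal(size=d)

orig_dist = np.linalg.norm(x - y)        # distance in input space
emb_dist = np.linalg.norm(W @ x - W @ y)  # distance after the random layer

ratio = emb_dist / orig_dist
print(f"distance ratio after embedding: {ratio:.3f}")
```

With `m` this large, the ratio concentrates tightly around 1, i.e. the random layer approximately preserves the distance between the two points.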
Cite
Text
Giryes et al. "On the Stability of Deep Networks." International Conference on Learning Representations, 2015.
Markdown
[Giryes et al. "On the Stability of Deep Networks." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/giryes2015iclr-stability/)
BibTeX
@inproceedings{giryes2015iclr-stability,
title = {{On the Stability of Deep Networks}},
author = {Giryes, Raja and Sapiro, Guillermo and Bronstein, Alexander M.},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/giryes2015iclr-stability/}
}