Layer-Wise Analysis of Deep Networks with Gaussian Kernels
Abstract
Deep networks can potentially express a learning problem more efficiently than local learning machines. While deep networks outperform local learning machines on some problems, it is still unclear how their good internal representations emerge from their complex structure. We present an analysis based on Gaussian kernels that measures how the representation of the learning problem evolves layer after layer as the deep network builds higher-level abstract representations of the input. We use this analysis to show empirically that deep networks build progressively better representations of the learning problem and that the best representations are obtained when the deep network discriminates only in the last layers.
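The analysis the abstract describes can be sketched as follows: at each layer, build a Gaussian kernel over the layer's activations, extract the leading kernel principal components, and measure how well a least-squares readout from those components predicts the labels. The function below is a minimal illustration of that idea using only NumPy; the function name, the single bandwidth parameter, and the mean-squared-error readout are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def layer_representation_error(X, Y, n_components, sigma=1.0):
    """Residual error of a least-squares readout from the leading
    kernel principal components of one layer's activations.

    X : (n, d) array of activations of a given layer for n inputs
    Y : (n, c) array of targets (e.g. one-hot labels)
    A lower error at a small n_components suggests the layer encodes
    the learning problem in few leading kernel components.
    """
    n = X.shape[0]
    # Gaussian (RBF) kernel matrix over the activations
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * sigma ** 2))
    # Center the kernel matrix in feature space
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    # Leading eigenvectors of the centered kernel = kernel principal components
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    U = V[:, idx]  # (n, n_components), orthonormal columns
    # Project the targets onto the span of those components
    Y_hat = U @ (U.T @ Y)
    return float(np.mean((Y - Y_hat) ** 2))
```

Applied to the activations of successive layers at a fixed number of components, a curve of this error over depth is one way to visualize whether the representation of the learning problem improves layer after layer. The error is non-increasing in `n_components`, since adding components only enlarges the subspace onto which the targets are projected.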
Cite
Text
Montavon et al. "Layer-Wise Analysis of Deep Networks with Gaussian Kernels." Neural Information Processing Systems, 2010.
Markdown
[Montavon et al. "Layer-Wise Analysis of Deep Networks with Gaussian Kernels." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/montavon2010neurips-layerwise/)
BibTeX
@inproceedings{montavon2010neurips-layerwise,
title = {{Layer-Wise Analysis of Deep Networks with Gaussian Kernels}},
author = {Montavon, Grégoire and Müller, Klaus-Robert and Braun, Mikio L.},
booktitle = {Neural Information Processing Systems},
year = {2010},
pages = {1678-1686},
url = {https://mlanthology.org/neurips/2010/montavon2010neurips-layerwise/}
}