Hierarchical Nucleation in Deep Neural Networks
Abstract
Deep convolutional networks (DCNs) learn meaningful representations where data that share the same abstract characteristics are positioned closer and closer. Understanding these representations and how they are generated is of unquestioned practical and theoretical interest. In this work we study the evolution of the probability density of the ImageNet dataset across the hidden layers in some state-of-the-art DCNs. We find that the initial layers generate a unimodal probability density getting rid of any structure irrelevant for classification. In subsequent layers density peaks arise in a hierarchical fashion that mirrors the semantic hierarchy of the concepts. Density peaks corresponding to single categories appear only close to the output and via a very sharp transition which resembles the nucleation process of a heterogeneous liquid. This process leaves a footprint in the probability density of the output layer where the topography of the peaks allows reconstructing the semantic relationships of the categories.
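The abstract describes tracking how class-conditional density peaks sharpen across a network's hidden layers. The snippet below is not the authors' code; it is a minimal NumPy sketch of one hypothetical probe in that spirit: the mean fraction of each point's k nearest neighbours that share its label, computed on synthetic "representations" whose class separation grows, mimicking increasing depth.

```python
import numpy as np

def class_neighbor_overlap(X, labels, k=10):
    """Mean fraction of each point's k nearest neighbours sharing its label.

    A simple proxy for how strongly class-conditional density peaks have
    formed: values near 1/num_classes indicate no class structure, values
    near 1 indicate sharp, nearly pure peaks.
    """
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    nn = np.argsort(d2, axis=1)[:, :k]    # indices of k nearest neighbours
    same = (labels[nn] == labels[:, None]).mean(axis=1)
    return same.mean()

rng = np.random.default_rng(0)
n_per_class, n_classes, dim = 50, 4, 20
labels = np.repeat(np.arange(n_classes), n_per_class)
centers = rng.normal(size=(n_classes, dim))

# Simulate representations that become progressively more class-separated,
# loosely mimicking the sharpening of density peaks towards the output.
for sep in [0.0, 1.0, 4.0]:
    X = sep * centers[labels] + rng.normal(size=(len(labels), dim))
    print(f"separation {sep}: overlap = {class_neighbor_overlap(X, labels):.2f}")
```

With zero separation the overlap stays near chance level (1/4 here); as the classes separate it approaches 1, which is the qualitative signature the paper reports near the output layers.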
Cite
Text
Doimo et al. "Hierarchical Nucleation in Deep Neural Networks." Neural Information Processing Systems, 2020.
Markdown
[Doimo et al. "Hierarchical Nucleation in Deep Neural Networks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/doimo2020neurips-hierarchical/)
BibTeX
@inproceedings{doimo2020neurips-hierarchical,
  title     = {{Hierarchical Nucleation in Deep Neural Networks}},
  author    = {Doimo, Diego and Glielmo, Aldo and Ansuini, Alessio and Laio, Alessandro},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/doimo2020neurips-hierarchical/}
}