Intraclass Clustering: An Implicit Learning Ability That Regularizes DNNs

Abstract

Several works have shown that the regularization mechanisms underlying deep neural networks' generalization performance are still poorly understood. In this paper, we hypothesize that deep neural networks are regularized through their ability to extract meaningful clusters among the samples of a class. This constitutes an implicit form of regularization, as no explicit training mechanism or supervision targets this behaviour. To support our hypothesis, we design four different measures of intraclass clustering, based on the neuron- and layer-level representations of the training data. We then show that these measures constitute accurate predictors of generalization performance across variations of a large set of hyperparameters (learning rate, batch size, optimizer, weight decay, dropout rate, data augmentation, network depth and width).
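The paper's four measures are not reproduced on this page. As a rough, hypothetical illustration of the general idea only, the sketch below scores a layer's training-set representations by how cleanly k-means splits each class into subclusters: the function name, the choice of k-means, and the use of the silhouette score are all assumptions for illustration, not the measures defined in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def intraclass_clustering_score(features, labels, n_subclusters=2, seed=0):
    """Illustrative (not the paper's) intraclass clustering measure.

    features: (n_samples, n_dims) layer activations on the training data.
    labels:   (n_samples,) integer class labels.
    Returns the average silhouette of k-means subclusters found within
    each class; higher values suggest the layer separates samples of a
    class into distinct, well-formed clusters.
    """
    scores = []
    for c in np.unique(labels):
        class_feats = features[labels == c]
        if len(class_feats) <= n_subclusters:
            continue  # too few samples to form subclusters
        km = KMeans(n_clusters=n_subclusters, random_state=seed, n_init=10)
        assignments = km.fit_predict(class_feats)
        # silhouette_score needs at least two distinct cluster labels
        if len(np.unique(assignments)) > 1:
            scores.append(silhouette_score(class_feats, assignments))
    return float(np.mean(scores)) if scores else float("nan")
```

Under the paper's hypothesis, such a score computed on training representations would correlate with test accuracy across hyperparameter variations; the silhouette-based sketch is just one plausible way to quantify intraclass cluster structure.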

Cite

Text

Carbonnelle and De Vleeschouwer. "Intraclass Clustering: An Implicit Learning Ability That Regularizes DNNs." International Conference on Learning Representations, 2021.

Markdown

[Carbonnelle and De Vleeschouwer. "Intraclass Clustering: An Implicit Learning Ability That Regularizes DNNs." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/carbonnelle2021iclr-intraclass/)

BibTeX

@inproceedings{carbonnelle2021iclr-intraclass,
  title     = {{Intraclass Clustering: An Implicit Learning Ability That Regularizes DNNs}},
  author    = {Carbonnelle, Simon and De Vleeschouwer, Christophe},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/carbonnelle2021iclr-intraclass/}
}