A Random Matrix Analysis of Learning with α-Dropout
Abstract
This article studies a single-hidden-layer neural network with generalized Dropout (α-Dropout), where dropped-out features are replaced with an arbitrary value α rather than zero. Specifically, under a large-dimensional data and network regime, we characterize the generalization performance of this network on a binary classification problem. We notably demonstrate that a careful choice of α different from 0 can drastically improve the generalization performance of the classifier.
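The α-Dropout mechanism described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's code; the function name, the keep-probability convention `p`, and the default values are assumptions for illustration. Classical Dropout corresponds to `alpha = 0`.

```python
import numpy as np

def alpha_dropout(x, p=0.5, alpha=0.0, rng=None):
    """Generalized (alpha-)Dropout sketch (illustrative, not the paper's code):
    each feature is kept with probability p; dropped features are replaced
    by the constant alpha instead of 0."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < p  # True where the feature is kept
    return np.where(mask, x, alpha)

# Example: with alpha != 0, dropped entries carry a nonzero constant
# instead of being zeroed out.
features = np.ones((4, 4))
out = alpha_dropout(features, p=0.5, alpha=0.3)
```

Setting `p=1.0` recovers the identity map, while `p=0.0` replaces every feature with α, which makes the two limiting cases easy to sanity-check.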
Cite
Text
Seddik et al. "A Random Matrix Analysis of Learning with α-Dropout." ICML 2020 Workshops: Artemiss, 2020.
Markdown
[Seddik et al. "A Random Matrix Analysis of Learning with α-Dropout." ICML 2020 Workshops: Artemiss, 2020.](https://mlanthology.org/icmlw/2020/seddik2020icmlw-random/)
BibTeX
@inproceedings{seddik2020icmlw-random,
title = {{A Random Matrix Analysis of Learning with α-Dropout}},
author = {Seddik, Mohamed El Amine and Couillet, Romain and Tamaazousti, Mohamed},
booktitle = {ICML 2020 Workshops: Artemiss},
year = {2020},
url = {https://mlanthology.org/icmlw/2020/seddik2020icmlw-random/}
}