Dropout Training as Adaptive Regularization
Abstract
Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an $L_2$ regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.
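As a rough sketch of the equivalence stated in the abstract (assuming a generalized linear model with log-partition function $A$, feature vectors $x_i$, weights $\beta$, and dropout deletion probability $\delta$; the notation here is illustrative and only approximately follows the paper's setup), the expected dropout penalty admits a second-order approximation of the form

$$R^{q}(\beta) \;=\; \frac{1}{2}\sum_{i} A''(x_i \cdot \beta)\,\mathrm{Var}\!\left[\tilde{x}_i \cdot \beta\right] \;=\; \frac{\delta}{2(1-\delta)} \sum_{j} \beta_j^2 \sum_{i} A''(x_i \cdot \beta)\, x_{ij}^2,$$

where $\tilde{x}_i$ denotes the dropout-corrupted features. The inner sum over $i$ estimates the $j$-th diagonal entry of the Fisher information, so the penalty behaves like an $L_2$ regularizer applied after rescaling the features by the inverse diagonal Fisher information.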
Cite
Text
Wager et al. "Dropout Training as Adaptive Regularization." Neural Information Processing Systems, 2013.Markdown
[Wager et al. "Dropout Training as Adaptive Regularization." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/wager2013neurips-dropout/)BibTeX
@inproceedings{wager2013neurips-dropout,
title = {{Dropout Training as Adaptive Regularization}},
author = {Wager, Stefan and Wang, Sida and Liang, Percy},
booktitle = {Neural Information Processing Systems},
year = {2013},
pages = {351--359},
url = {https://mlanthology.org/neurips/2013/wager2013neurips-dropout/}
}