Recurrent Batch Normalization
Abstract
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
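The reparameterization described above can be sketched roughly as follows. This is a minimal NumPy forward pass for illustration only: the function and parameter names are hypothetical, and the per-time-step statistics and running averages used at training and test time in the paper are omitted. Batch normalization is applied separately to the input-to-hidden and hidden-to-hidden projections before the gate nonlinearities, and once more to the cell state before the output gate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize over the batch dimension, then rescale (gamma) and shift (beta).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def bn_lstm_step(x_t, h_prev, c_prev, params):
    # One batch-normalized LSTM step (hypothetical parameter layout):
    # the input-to-hidden and hidden-to-hidden projections are normalized
    # independently, so the recurrent transition is also covered by BN.
    W_x, W_h, b = params["W_x"], params["W_h"], params["b"]
    gates = (batch_norm(x_t @ W_x, params["gamma_x"], 0.0)
             + batch_norm(h_prev @ W_h, params["gamma_h"], 0.0)
             + b)
    i, f, o, g = np.split(gates, 4, axis=1)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    # The cell state is normalized as well before producing the new hidden state.
    h_t = sigmoid(o) * np.tanh(batch_norm(c_t, params["gamma_c"], params["beta_c"]))
    return h_t, c_t
```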
Cite
Text
Cooijmans et al. "Recurrent Batch Normalization." International Conference on Learning Representations, 2017.
Markdown
[Cooijmans et al. "Recurrent Batch Normalization." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/cooijmans2017iclr-recurrent/)
BibTeX
@inproceedings{cooijmans2017iclr-recurrent,
title = {{Recurrent Batch Normalization}},
author = {Cooijmans, Tim and Ballas, Nicolas and Laurent, César and Gülçehre, Çaglar and Courville, Aaron C.},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/cooijmans2017iclr-recurrent/}
}