Mode Normalization
Abstract
Normalization methods are a central building block in the deep learning toolbox. They accelerate and stabilize training, while decreasing the dependence on manually tuned learning rate schedules. When learning from multi-modal distributions, the effectiveness of batch normalization (BN), arguably the most prominent normalization method, is reduced. As a remedy, we propose a more flexible approach: by extending the normalization to more than a single mean and variance, we detect modes of data on-the-fly, jointly normalizing samples that share common features. We demonstrate that our method outperforms BN and other widely used normalization techniques in several experiments, including single and multi-task datasets.
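The idea in the abstract can be made concrete with a short sketch: a softmax gate softly assigns each sample to one of K modes, and samples are normalized by a gate-weighted mixture of per-mode statistics, roughly MN(x) = γ Σ_k g_k(x) (x − μ_k)/σ_k + β. The PyTorch code below is a minimal, training-mode-only illustration; the class name ModeNorm2d, the spatially pooled gating features, and the shared affine parameters are illustrative assumptions, not the authors' reference implementation (which, per the paper, also maintains running estimates for inference).

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModeNorm2d(nn.Module):
    """Sketch of mode normalization over (N, C, H, W) inputs.

    A softmax gate assigns each sample to K modes; samples are
    normalized by a gate-weighted mixture of per-mode statistics.
    Training-mode statistics only (no running estimates).
    """

    def __init__(self, num_features, num_modes=2, eps=1e-5):
        super().__init__()
        self.num_modes = num_modes
        self.eps = eps
        # Assumed gating network: linear map of spatially pooled features.
        self.gate = nn.Linear(num_features, num_modes)
        # Affine transform shared across modes, as in standard BN.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        n, c, h, w = x.shape
        # Soft mode assignments g[n, k], computed per sample.
        g = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)  # (N, K)
        out = torch.zeros_like(x)
        for k in range(self.num_modes):
            gk = g[:, k].view(n, 1, 1, 1)
            nk = gk.sum() * h * w + self.eps  # effective sample count
            # Gate-weighted per-mode mean and variance over batch/space.
            mu = (gk * x).sum(dim=(0, 2, 3), keepdim=True) / nk
            var = (gk * (x - mu) ** 2).sum(dim=(0, 2, 3), keepdim=True) / nk
            # Each sample is normalized by mode k in proportion to g[n, k].
            out = out + gk * (x - mu) / torch.sqrt(var + self.eps)
        return out * self.weight.view(1, c, 1, 1) + self.bias.view(1, c, 1, 1)

# Usage: a drop-in replacement for nn.BatchNorm2d during training.
mn = ModeNorm2d(num_features=64, num_modes=2)
y = mn(torch.randn(8, 64, 32, 32))  # y.shape == (8, 64, 32, 32)

With num_modes=1 the mixture collapses to a single mean and variance, recovering standard batch normalization, which is why the method is described as an extension of BN.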
Cite
Text
Deecke et al. "Mode Normalization." International Conference on Learning Representations, 2019.

Markdown
[Deecke et al. "Mode Normalization." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/deecke2019iclr-mode/)

BibTeX
@inproceedings{deecke2019iclr-mode,
  title     = {{Mode Normalization}},
  author    = {Deecke, Lucas and Murray, Iain and Bilen, Hakan},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/deecke2019iclr-mode/}
}