Sparsity Aware Normalization for GANs
Abstract
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their critic (discriminator) network during training. In this paper, we analyze the popular spectral normalization scheme, find a significant drawback, and introduce sparsity aware normalization (SAN), a new alternative approach for stabilizing GAN training. As opposed to other normalization methods, our approach explicitly accounts for the sparse nature of the feature maps in convolutional networks with ReLU activations. We illustrate the effectiveness of our method through extensive experiments with a variety of network architectures. As we show, sparsity is particularly dominant in critics used for image-to-image translation settings. In these cases, our approach improves upon existing methods in fewer training epochs and with smaller-capacity networks, while requiring practically no computational overhead.
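For context on the baseline the abstract refers to: spectral normalization (Miyato et al.) constrains the critic by dividing each weight matrix by an estimate of its largest singular value, typically obtained via power iteration. The snippet below is a minimal NumPy sketch of that standard scheme, not of the SAN method proposed in this paper; the function name and iteration count are illustrative choices.

```python
import numpy as np

def spectral_norm(W, n_iter=20):
    """Return W divided by an estimate of its largest singular value,
    computed with power iteration (the standard spectral normalization
    scheme; n_iter is an illustrative choice)."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value of W
    return W / sigma

W = np.array([[3.0, 0.0], [0.0, 1.0]])
W_sn = spectral_norm(W)
# largest singular value of W_sn is ~1
```

In practice (e.g. in GAN training loops), a single power iteration per training step is usually enough, since the weights change slowly between steps.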
Cite
Text
Kligvasser and Michaeli. "Sparsity Aware Normalization for GANs." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I9.16996
Markdown
[Kligvasser and Michaeli. "Sparsity Aware Normalization for GANs." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/kligvasser2021aaai-sparsity/) doi:10.1609/AAAI.V35I9.16996
BibTeX
@inproceedings{kligvasser2021aaai-sparsity,
title = {{Sparsity Aware Normalization for GANs}},
author = {Kligvasser, Idan and Michaeli, Tomer},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {8181--8190},
doi = {10.1609/AAAI.V35I9.16996},
url = {https://mlanthology.org/aaai/2021/kligvasser2021aaai-sparsity/}
}