Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration
Abstract
Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires a vast number of training images. Recent research on GAN tickets reveals that dense GAN models contain sparse sub-networks, or "lottery tickets," that yield better results when trained in isolation under limited data. However, finding GAN tickets requires an expensive train-prune-retrain process. In this paper, we propose Re-GAN, a data-efficient GAN training method that dynamically reconfigures the GAN architecture to explore different sub-network structures during training. Our method repeatedly prunes unimportant connections to regularize the GAN and regrows them to reduce the risk of prematurely pruning important connections. Re-GAN stabilizes GAN models with less data and offers an alternative to existing GAN tickets and progressive growing methods. We demonstrate that Re-GAN is a generic training methodology that achieves stability on datasets of varying sizes, domains, and resolutions (CIFAR-10, Tiny-ImageNet, and multiple few-shot generation datasets) as well as across different GAN architectures (SNGAN, ProGAN, StyleGAN2, and AutoGAN). Re-GAN also improves performance when combined with recent augmentation approaches. Moreover, Re-GAN requires fewer floating-point operations (FLOPs) and less training time by removing unimportant connections during training, while generating samples of comparable or even higher quality. Compared to the state-of-the-art StyleGAN2, our method performs better without requiring any additional fine-tuning step. Code can be found at this link: https://github.com/IntellicentAI-Lab/Re-GAN
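The prune-and-regrow cycle described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the magnitude-based pruning criterion, and the random regrowth policy are illustrative assumptions; Re-GAN's actual importance scores and schedule are defined in the paper.

```python
import numpy as np

def prune_mask(weights, sparsity):
    """Hypothetical pruning step: zero out the smallest-magnitude weights.

    Returns a binary mask with `sparsity` fraction of entries set to 0.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights)
    # Threshold at the k-th smallest absolute value; strictly larger survive.
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

def regrow_mask(mask, regrow_frac, rng):
    """Hypothetical regrow step: randomly re-enable pruned connections,
    reducing the risk that an important connection stays pruned forever."""
    pruned = np.flatnonzero(mask == 0)
    n = int(regrow_frac * pruned.size)
    revived = rng.choice(pruned, size=n, replace=False)
    new_mask = mask.copy()
    new_mask.ravel()[revived] = 1
    return new_mask

# Toy usage: one prune/regrow round on an 8x8 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
m = prune_mask(w, sparsity=0.5)                 # drop the 50% smallest weights
m = regrow_mask(m, regrow_frac=0.25, rng=rng)   # revive 25% of the dropped ones
```

In a real training loop, the mask would be applied to the generator/discriminator weights each step, and the prune/regrow cycle repeated on a schedule so different sub-network structures are explored over the course of training.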
Cite
Text
Saxena et al. "Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01557
Markdown
[Saxena et al. "Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/saxena2023cvpr-regan/) doi:10.1109/CVPR52729.2023.01557
BibTeX
@inproceedings{saxena2023cvpr-regan,
title = {{Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration}},
author = {Saxena, Divya and Cao, Jiannong and Xu, Jiahao and Kulshrestha, Tarun},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {16230-16240},
doi = {10.1109/CVPR52729.2023.01557},
url = {https://mlanthology.org/cvpr/2023/saxena2023cvpr-regan/}
}