Accelerated WGAN Update Strategy with Loss Change Rate Balancing

Abstract

Optimizing the discriminator in Generative Adversarial Networks (GANs) to completion in the inner training loop is computationally prohibitive, and on finite datasets it would result in overfitting. To address this, a common update strategy is to alternate between k optimization steps for the discriminator D and one optimization step for the generator G. This strategy is used in various GAN algorithms, with k selected empirically. In this paper, we show that this update strategy is not optimal in terms of accuracy and convergence speed, and propose a new update strategy for networks with WGAN-family loss functions (e.g., WGAN, WGAN-GP, DeblurGAN, and Super-Resolution GAN). The proposed update strategy compares the loss change rates of G and D to decide which network to update at each iteration. We demonstrate that the proposed strategy improves both convergence speed and accuracy.
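Below is a minimal Python sketch of the general idea of alternating updates driven by loss change rates. It is not the paper's reference implementation: the helper callables d_step and g_step (one optimizer step for D or G, returning the current scalar loss) and the specific comparison rule are illustrative assumptions.

def balanced_updates(d_step, g_step, num_iters, eps=1e-8):
    # Bootstrap: take two steps of each network so a loss change
    # rate can be computed for both D and G.
    loss_d_prev, loss_d = d_step(), d_step()
    loss_g_prev, loss_g = g_step(), g_step()

    for _ in range(num_iters):
        # Relative loss change rate of each network since its last update.
        r_d = abs(loss_d - loss_d_prev) / (abs(loss_d_prev) + eps)
        r_g = abs(loss_g - loss_g_prev) / (abs(loss_g_prev) + eps)

        if r_d >= r_g:
            # D's loss is still changing faster: spend this step on D.
            loss_d_prev, loss_d = loss_d, d_step()
        else:
            # Otherwise spend the step on G.
            loss_g_prev, loss_g = loss_g, g_step()

In contrast to a fixed k:1 schedule, the ratio comparison decides at every iteration which network receives the next gradient step, so the number of D updates per G update adapts as training progresses.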

Cite

Text

Ouyang et al. "Accelerated WGAN Update Strategy with Loss Change Rate Balancing." Winter Conference on Applications of Computer Vision, 2021.

Markdown

[Ouyang et al. "Accelerated WGAN Update Strategy with Loss Change Rate Balancing." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/ouyang2021wacv-accelerated/)

BibTeX

@inproceedings{ouyang2021wacv-accelerated,
  title     = {{Accelerated WGAN Update Strategy with Loss Change Rate Balancing}},
  author    = {Ouyang, Xu and Chen, Ying and Agam, Gady},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2021},
  pages     = {2546--2555},
  url       = {https://mlanthology.org/wacv/2021/ouyang2021wacv-accelerated/}
}