A Decentralized Parallel Algorithm for Training Generative Adversarial Nets
Abstract
Generative Adversarial Networks (GANs) are a powerful class of generative models in the deep learning community. Current practice in large-scale GAN training utilizes large models and distributed large-batch training strategies, and is implemented on deep learning frameworks (e.g., TensorFlow, PyTorch) designed in a centralized manner. In a centralized network topology, every worker must either directly communicate with the central node or indirectly communicate with all other workers in every iteration. When the network bandwidth is low or network latency is high, performance is therefore significantly degraded. Despite recent progress on decentralized algorithms for training deep neural networks, it remains unclear whether it is possible to train GANs in a decentralized manner. The main difficulty lies in handling the nonconvex-nonconcave min-max optimization and the decentralized communication simultaneously. In this paper, we address this difficulty by designing the first gradient-based decentralized parallel algorithm, which allows workers to perform multiple rounds of communication in one iteration and to update the discriminator and generator simultaneously; this design makes the proposed decentralized algorithm amenable to convergence analysis. Theoretically, the proposed algorithm solves a class of nonconvex-nonconcave min-max problems with provable non-asymptotic convergence to a first-order stationary point. Experimental results on GANs demonstrate the effectiveness of the proposed algorithm.
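The two ingredients the abstract highlights — simultaneous (rather than alternating) gradient updates for the two players, and multiple gossip rounds per iteration over a decentralized topology with no central server — can be illustrated on a toy saddle-point problem. The sketch below is a minimal illustration under assumed quadratic local objectives `f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2` (not the paper's actual GAN losses, step sizes, or mixing schedule); the ring gossip matrix `W` and all constants are illustrative choices.

```python
import numpy as np

# Hypothetical toy setup (not the paper's GAN objective): each of n workers
# holds a local saddle-point objective
#   f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2
# whose unique saddle point sits at (x, y) = (0, 0).
rng = np.random.default_rng(0)
n = 4
a = 1.0 + 0.2 * rng.standard_normal(n)  # heterogeneous local curvatures
b = 1.0 + 0.2 * rng.standard_normal(n)
c = 1.0 + 0.2 * rng.standard_normal(n)

# Ring-topology gossip matrix (doubly stochastic): each worker averages only
# with its two neighbours -- no central parameter server involved.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = rng.standard_normal(n)   # per-worker copies of the "generator" variable
y = rng.standard_normal(n)   # per-worker copies of the "discriminator" variable
eta, rounds = 0.1, 2         # illustrative step size; gossip rounds per iteration

for _ in range(500):
    # Simultaneous gradient descent-ascent on the local objectives:
    # both players are updated from the same current iterate.
    gx = a * x + b * y       # d f_i / d x at worker i
    gy = b * x - c * y       # d f_i / d y at worker i
    x, y = x - eta * gx, y + eta * gy
    # Multiple communication rounds per iteration: repeated neighbour averaging.
    for _ in range(rounds):
        x, y = W @ x, W @ y

print(np.abs(x).max(), np.abs(y).max())  # all workers end up near the saddle (0, 0)
```

The gossip steps both drive the workers toward consensus and average out the heterogeneity in the local objectives, which is why repeating them within one iteration (rather than once) can help when local objectives differ.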
Cite
Text
Liu et al. "A Decentralized Parallel Algorithm for Training Generative Adversarial Nets." Neural Information Processing Systems, 2020.
Markdown
[Liu et al. "A Decentralized Parallel Algorithm for Training Generative Adversarial Nets." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/liu2020neurips-decentralized/)
BibTeX
@inproceedings{liu2020neurips-decentralized,
title = {{A Decentralized Parallel Algorithm for Training Generative Adversarial Nets}},
author = {Liu, Mingrui and Zhang, Wei and Mroueh, Youssef and Cui, Xiaodong and Ross, Jarret and Yang, Tianbao and Das, Payel},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/liu2020neurips-decentralized/}
}