Learning Disconnected Manifolds: A No GAN’s Land
Abstract
Typical architectures of Generative Adversarial Networks transform a unimodal latent/input distribution with a continuous generator. Consequently, the modeled distribution always has connected support, which is problematic when the target is a disconnected set of manifolds. We formalize this problem by establishing a "no free lunch" theorem for disconnected manifold learning, stating an upper bound on the precision of the modeled distribution. The proof builds on the necessary existence of a low-quality region where the generator continuously samples data between two disconnected modes. Finally, we derive a rejection sampling method based on the norm of the generator's Jacobian and show its efficiency on several generators, including BigGAN.
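The rejection idea can be sketched as follows: since a continuous generator must cross a low-quality region between disconnected modes, and such transitions tend to coincide with a large Jacobian norm, samples whose Jacobian norm exceeds a threshold are discarded. Below is a minimal numpy sketch of this principle, not the authors' implementation: the toy `generator`, the finite-difference Jacobian estimate, and the `threshold` value are all illustrative assumptions.

```python
import numpy as np

# Toy "generator": maps a 2-D latent z to a 2-D output. The sharp tanh
# transition mimics the jump a generator makes between two modes.
def generator(z):
    return np.tanh(3.0 * z)

def jacobian_frobenius_norm(g, z, eps=1e-5):
    """Central finite-difference estimate of ||J_g(z)||_F."""
    z = np.asarray(z, dtype=float)
    cols = []
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        cols.append((g(z + dz) - g(z - dz)) / (2.0 * eps))
    J = np.stack(cols, axis=1)  # one column per latent coordinate
    return np.linalg.norm(J, ord="fro")

def jacobian_rejection_sample(g, n, latent_dim, threshold, rng):
    """Draw latents from N(0, I); keep samples whose Jacobian norm
    stays below `threshold` (i.e. away from the transition region)."""
    kept = []
    while len(kept) < n:
        z = rng.standard_normal(latent_dim)
        if jacobian_frobenius_norm(g, z) < threshold:
            kept.append(g(z))
    return np.stack(kept)
```

In this toy setting the Jacobian norm peaks at the origin (the "no GAN's land" between the two tanh plateaus), so thresholding it removes exactly the in-between samples; for a real network one would compute the Jacobian by automatic differentiation instead of finite differences.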
Cite
Text
Tanielian et al. "Learning Disconnected Manifolds: A No GAN’s Land." International Conference on Machine Learning, 2020.
Markdown
[Tanielian et al. "Learning Disconnected Manifolds: A No GAN’s Land." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/tanielian2020icml-learning/)
BibTeX
@inproceedings{tanielian2020icml-learning,
title = {{Learning Disconnected Manifolds: A No GAN’s Land}},
author = {Tanielian, Ugo and Issenhuth, Thibaut and Dohmatob, Elvis and Mary, Jeremie},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {9418--9427},
volume = {119},
url = {https://mlanthology.org/icml/2020/tanielian2020icml-learning/}
}