Generalized Loss-Sensitive Adversarial Learning with Manifold Margins

Abstract

The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: the unregularized versus the regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, the regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that real data such as natural images are not uniformly distributed over the whole data space. Instead, they are often restricted to a low-dimensional manifold of the ambient space. Such a manifold assumption suggests that the distance over the manifold should be a better measure to characterize the distinction between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin is defined as the distance between the pullback representations to distinguish between real and fake samples and learn the optimal generators. We justify the effectiveness of the proposed model both theoretically and empirically.
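To make the abstract's idea concrete, below is a minimal sketch in PyTorch of a loss-sensitive objective with a manifold margin: a pullback operator (here a hypothetical encoder network) maps real and generated samples to manifold coordinates, and the margin between a real/fake pair is measured there rather than in pixel space. The names encoder, critic, and lam, the L1 distance, and the hinge relaxation are all assumptions for illustration; the paper's exact objective may differ.

import torch
import torch.nn.functional as F

def manifold_margin_loss(encoder, critic, real, fake, lam=1.0):
    # Pull both batches back to their manifold coordinates
    # (encoder plays the role of the pullback operator).
    z_real = encoder(real).flatten(1)
    z_fake = encoder(fake).flatten(1)
    # Manifold margin: distance between the pullback representations.
    margin = (z_real - z_fake).abs().sum(dim=1)
    # Loss-sensitive constraint L(real) + lam * margin <= L(fake),
    # relaxed with a hinge as in LS-GAN-style objectives.
    loss_real = critic(real).view(-1)
    loss_fake = critic(fake).view(-1)
    violation = F.relu(loss_real + lam * margin - loss_fake)
    return loss_real.mean() + violation.mean()

Under this sketch, the critic is trained to assign real samples a loss lower than fake ones by at least the manifold margin, so harder-to-distinguish pairs (small margin on the manifold) are penalized less severely than clearly separable ones.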

Cite

Text

Edraki and Qi. "Generalized Loss-Sensitive Adversarial Learning with Manifold Margins." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01228-1_6

Markdown

[Edraki and Qi. "Generalized Loss-Sensitive Adversarial Learning with Manifold Margins." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/edraki2018eccv-generalized/) doi:10.1007/978-3-030-01228-1_6

BibTeX

@inproceedings{edraki2018eccv-generalized,
  title     = {{Generalized Loss-Sensitive Adversarial Learning with Manifold Margins}},
  author    = {Edraki, Marzieh and Qi, Guo-Jun},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01228-1_6},
  url       = {https://mlanthology.org/eccv/2018/edraki2018eccv-generalized/}
}