Generative Networks as Inverse Problems with Scattering Transforms

Abstract

Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) generate impressive images from Gaussian white noise, but the underlying mathematics is not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator; as a result, they do not need to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations, so the generators transform linear interpolations between input white-noise vectors into deformations between output images. The embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have properties similar to those of GANs and VAEs, without learning a discriminative network or an encoder.
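The core idea in the abstract can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation: a fixed random linear map `Phi` stands in for the wavelet Scattering transform, the training images and all dimensions are made up, and the "generator" `G` is a linear least-squares inverse of `Phi` on the training set. The point it demonstrates is the structure of the method: the embedding is fixed (never trained), the generator is obtained by solving an inverse problem, and no discriminator or encoder is involved; linear interpolation between latent codes then yields a path of output images.

```python
import numpy as np

# Toy sketch of the inverse-problem setup (stand-ins, NOT the paper's code):
# a *fixed* embedding Phi maps images to latent codes, and a generator G
# is fit to invert Phi on the training set -- no discriminator, no encoder.
rng = np.random.default_rng(0)

d_img, d_code, n = 64, 16, 200                 # toy image dim, code dim, sample count
images = rng.normal(size=(n, d_img))           # placeholder "training images"

# Fixed embedding: a random linear map standing in for the Scattering transform.
Phi = rng.normal(size=(d_code, d_img)) / np.sqrt(d_img)
codes = images @ Phi.T                         # z_i = Phi(x_i); Phi is never learned

# Fit the generator G : z -> x by least squares, i.e. solve the inverse problem
# min_G sum_i ||G(z_i) - x_i||^2 over the training pairs (z_i, x_i).
G, *_ = np.linalg.lstsq(codes, images, rcond=None)

# Linear interpolation between two latent codes gives a path of generated images,
# mirroring the interpolation experiments described in the abstract.
z0, z1 = codes[0], codes[1]
path = [((1 - t) * z0 + t * z1) @ G for t in np.linspace(0.0, 1.0, 5)]

# Relative reconstruction error of the fitted inverse on the training set.
rel_err = np.linalg.norm(codes @ G - images) / np.linalg.norm(images)
print(len(path), path[0].shape, rel_err < 1.0)
```

In the paper the generator is a deep convolutional network rather than a linear map, and the embedding's Lipschitz continuity to deformations is what makes latent interpolations correspond to image deformations; this sketch only shows the fixed-embedding / inverse-problem training structure.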

Cite

Text

Angles and Mallat. "Generative Networks as Inverse Problems with Scattering Transforms." International Conference on Learning Representations, 2018.

Markdown

[Angles and Mallat. "Generative Networks as Inverse Problems with Scattering Transforms." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/angles2018iclr-generative/)

BibTeX

@inproceedings{angles2018iclr-generative,
  title     = {{Generative Networks as Inverse Problems with Scattering Transforms}},
  author    = {Angles, Tomás and Mallat, Stéphane},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/angles2018iclr-generative/}
}