LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation
Abstract
We present LR-GAN: an adversarial image generation model that takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate the image background and foregrounds separately and recursively, and to stitch the foregrounds onto the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape, and pose. The whole model is unsupervised and is trained end-to-end with gradient descent methods. The experiments demonstrate that LR-GAN can generate more natural images, with objects that are more human-recognizable, than DCGAN.
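The layered composition described in the abstract can be sketched as a simple recursion: each foreground layer contributes an appearance and a mask (its shape), and the mask decides where the foreground replaces the running canvas. The sketch below is illustrative only; the function name is hypothetical, and the pose (the affine warp LR-GAN applies to each layer) is omitted for brevity.

```python
import numpy as np

def composite(background, foregrounds):
    """Recursively stitch foreground layers onto a background.

    Each foreground is an (appearance, mask) pair. The recursion
    C_t = m_t * f_t + (1 - m_t) * C_{t-1} mirrors the layered
    generation described in the abstract; the per-layer pose warp
    is intentionally left out of this sketch.
    """
    canvas = background
    for appearance, mask in foregrounds:
        canvas = mask * appearance + (1.0 - mask) * canvas
    return canvas

# Toy example: paste a bright 2x2 square onto a gray background.
bg = np.full((4, 4), 0.2)
fg = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
out = composite(bg, [(fg, mask)])
```

In the full model, each layer's appearance, mask, and pose parameters come from the generator, so the discriminator sees only the composed image and the whole pipeline remains trainable end-to-end.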
Cite
Text

Yang et al. "LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation." International Conference on Learning Representations, 2017.

Markdown

[Yang et al. "LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/yang2017iclr-lr/)

BibTeX
@inproceedings{yang2017iclr-lr,
  title     = {{LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation}},
  author    = {Yang, Jianwei and Kannan, Anitha and Batra, Dhruv and Parikh, Devi},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/yang2017iclr-lr/}
}