A Layer-Based Sequential Framework for Scene Generation with GANs
Abstract
The visual world we sense, interpret, and interact with every day is a complex composition of interleaved physical entities. Generating vivid scenes of comparable complexity with computers is therefore a very challenging task. In this work, we present a scene generation framework based on Generative Adversarial Networks (GANs) that composes a scene sequentially, breaking the underlying problem down into smaller ones. Unlike existing approaches, our framework offers explicit control over the elements of a scene through separate background and foreground generators. Starting from an initially generated background, foreground objects then populate the scene one by one in a sequential manner. Through quantitative and qualitative experiments on a subset of the MS-COCO dataset, we show that our proposed framework not only produces more diverse images but also copes better with affine transformations and occlusion artifacts of foreground objects than its counterparts.
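The core idea of the framework is a sequential, layer-based composition: generate a background first, then add foreground objects one at a time while conditioning on the current canvas. The sketch below illustrates this composition loop only; the generator interfaces, latent dimensions, and the alpha-compositing step are assumptions for illustration and are not taken from the authors' released code.

```python
import torch

def compose_scene(background_gen, foreground_gen, num_objects, z_dim=128, device="cpu"):
    """Hypothetical sketch of layer-based sequential scene composition.

    background_gen: callable mapping a latent code to a background image tensor.
    foreground_gen: callable mapping (latent code, current canvas) to an object's
                    RGB appearance and a soft alpha mask (both assumptions).
    """
    # Generate the background layer from a sampled latent code.
    z_bg = torch.randn(1, z_dim, device=device)
    canvas = background_gen(z_bg)  # e.g. a (1, 3, H, W) image tensor

    for _ in range(num_objects):
        # Each step conditions on the current canvas so that a newly added
        # object can respect what is already in the scene (placement, occlusion).
        z_fg = torch.randn(1, z_dim, device=device)
        rgb, alpha = foreground_gen(z_fg, canvas)

        # Alpha-composite the new foreground object onto the canvas.
        canvas = alpha * rgb + (1.0 - alpha) * canvas

    return canvas
```

Keeping background and foreground generation separate is what gives the explicit per-element control mentioned in the abstract: each object is introduced by its own generation step rather than being entangled in a single monolithic image generator.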
Cite
Text
Turkoglu et al. "A Layer-Based Sequential Framework for Scene Generation with GANs." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33018901

Markdown
[Turkoglu et al. "A Layer-Based Sequential Framework for Scene Generation with GANs." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/turkoglu2019aaai-layer/) doi:10.1609/AAAI.V33I01.33018901

BibTeX
@inproceedings{turkoglu2019aaai-layer,
title = {{A Layer-Based Sequential Framework for Scene Generation with GANs}},
author = {Turkoglu, Mehmet Ozgur and Thong, William and Spreeuwers, Luuk J. and Kicanaoglu, Berkay},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {8901--8908},
doi = {10.1609/AAAI.V33I01.33018901},
url = {https://mlanthology.org/aaai/2019/turkoglu2019aaai-layer/}
}