DeepLandscape: Adversarial Modeling of Landscape Videos

Abstract

We build a new model of landscape videos that can be trained on a mixture of static landscape images and landscape animations. Our architecture extends the StyleGAN model by augmenting it with components that model dynamic changes in a scene. Once trained, our model can generate realistic time-lapse landscape videos with moving objects and time-of-day changes. Furthermore, by fitting the learned models to a static landscape image, that image can be reenacted in a realistic way. We propose simple but necessary modifications to the StyleGAN inversion procedure, which yield in-domain latent codes and allow real images to be manipulated. Quantitative comparisons and user studies suggest that our model produces more compelling animations of given photographs than previously proposed methods. The results of our approach, including comparisons with prior art, can be seen in the supplementary materials and on the project page https://saic-mdal.github.io/deep-landscape/.
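
As a rough illustration of the "fit the learned model to a static image" step mentioned in the abstract, the sketch below shows generic GAN inversion by latent optimization in PyTorch. The ToyGenerator stand-in, the invert helper, and the pixel-only loss are assumptions made for illustration; they are not the paper's actual architecture or its modified in-domain inversion procedure.

# Minimal sketch of GAN inversion by latent optimization (illustrative only;
# the generator and loss here are placeholders, not the DeepLandscape method).
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder standing in for a pretrained StyleGAN-like generator."""
    def __init__(self, latent_dim=512, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 3 * image_size * image_size),
            nn.Tanh(),  # outputs in [-1, 1], matching the target range below
        )

    def forward(self, w):
        out = self.net(w)
        return out.view(-1, 3, self.image_size, self.image_size)

def invert(generator, target, latent_dim=512, steps=500, lr=0.05):
    """Optimize a latent code w so that generator(w) reconstructs `target`."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(w)
        loss = nn.functional.mse_loss(recon, target)  # pixel loss only
        loss.backward()
        opt.step()
    return w.detach()

if __name__ == "__main__":
    g = ToyGenerator()
    target_image = torch.rand(1, 3, 64, 64) * 2 - 1  # stand-in "photo" in [-1, 1]
    w_fit = invert(g, target_image)
    print("Fitted latent shape:", w_fit.shape)

In a real setting, the placeholder generator would be replaced by a pretrained model and the pixel loss would typically be combined with a perceptual term; the paper's contribution lies in modifying this kind of inversion so that the recovered latent codes stay in-domain and the resulting image can be animated.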

Cite

Text

Logacheva et al. "DeepLandscape: Adversarial Modeling of Landscape Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58592-1_16

Markdown

[Logacheva et al. "DeepLandscape: Adversarial Modeling of Landscape Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/logacheva2020eccv-deeplandscape/) doi:10.1007/978-3-030-58592-1_16

BibTeX

@inproceedings{logacheva2020eccv-deeplandscape,
  title     = {{DeepLandscape: Adversarial Modeling of Landscape Videos}},
  author    = {Logacheva, Elizaveta and Suvorov, Roman and Khomenko, Oleg and Mashikhin, Anton and Lempitsky, Victor},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58592-1_16},
  url       = {https://mlanthology.org/eccv/2020/logacheva2020eccv-deeplandscape/}
}