Image2GIF: Generating Cinemagraphs Using Recurrent Deep Q-Networks

Abstract

Given a still photograph, one can imagine how dynamic objects might move against a static background. This idea has been actualized in the form of cinemagraphs, where the motion of particular objects within a still image is repeated, giving the viewer a sense of animation. In this paper, we learn computational models that can automatically generate cinemagraph sequences from a single image. To generate cinemagraphs, we explore combining generative models with a recurrent neural network and deep Q-networks to enhance the power of sequence generation. To enable and evaluate these models we make use of two datasets, one synthetically generated and the other containing cinemagraphs generated from real videos. Both qualitative and quantitative evaluations demonstrate the effectiveness of our models on the synthetic and real datasets.
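As a loose illustration only, and not the authors' implementation, the sketch below shows one way the recurrent frame-generation idea in the abstract could look in PyTorch: an encoder summarizes the still image, a simple gated recurrent update carries temporal state, and a decoder emits each successive frame. Every module, layer size, and name here is an assumption for illustration, and the paper's deep Q-network component is omitted.

# Minimal sketch (assumed architecture, not the authors' code) of a
# recurrent generator that turns one still image into a frame sequence.
import torch
import torch.nn as nn

class RecurrentFrameGenerator(nn.Module):
    def __init__(self, hidden_channels: int = 64):
        super().__init__()
        # Encoder: RGB still image -> feature map (illustrative sizes).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Convolutional recurrent update over feature maps; a stand-in
        # for whatever recurrence the paper actually uses.
        self.update = nn.Conv2d(hidden_channels * 2, hidden_channels,
                                kernel_size=3, padding=1)
        # Decoder: hidden state -> next RGB frame in [-1, 1].
        self.decoder = nn.Sequential(
            nn.Conv2d(hidden_channels, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, num_frames: int = 8):
        appearance = self.encoder(image)  # static appearance features
        h = appearance                    # initial hidden state from the still
        frames = []
        for _ in range(num_frames):
            # Mix the current state with the static features, then decode.
            h = torch.tanh(self.update(torch.cat([h, appearance], dim=1)))
            frames.append(self.decoder(h))
        return torch.stack(frames, dim=1)  # (batch, time, 3, H, W)

if __name__ == "__main__":
    still = torch.randn(1, 3, 64, 64)                 # dummy input image
    gif = RecurrentFrameGenerator()(still, num_frames=4)
    print(gif.shape)                                  # torch.Size([1, 4, 3, 64, 64])

In the paper's setup, such per-frame outputs would additionally be scored and selected with deep Q-networks; that stage is left out of this sketch.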

Cite

Text

Zhou et al. "Image2GIF: Generating Cinemagraphs Using Recurrent Deep Q-Networks." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00025

Markdown

[Zhou et al. "Image2GIF: Generating Cinemagraphs Using Recurrent Deep Q-Networks." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/zhou2018wacv-image/) doi:10.1109/WACV.2018.00025

BibTeX

@inproceedings{zhou2018wacv-image,
  title     = {{Image2GIF: Generating Cinemagraphs Using Recurrent Deep Q-Networks}},
  author    = {Zhou, Yipin and Song, Yale and Berg, Tamara L.},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2018},
  pages     = {170--178},
  doi       = {10.1109/WACV.2018.00025},
  url       = {https://mlanthology.org/wacv/2018/zhou2018wacv-image/}
}