Learning Temporal Transformations from Time-Lapse Videos

Abstract

Based on life-long observations of physical, chemical, and biological phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. With these models we explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and we also conduct a human evaluation to compare variations of our models.
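To make the prediction setups in the abstract concrete, below is a minimal sketch (assuming PyTorch; this is not the authors' released code or architecture) of the single-image variant: an encoder-decoder that maps one depiction of an object to a predicted depiction at a later time, followed by a recursive rollout that feeds each prediction back in. The two-image variant would concatenate both input depictions along the channel dimension (e.g. in_channels=6). All layer sizes, names, and the 64x64 resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FuturePredictor(nn.Module):
    """Illustrative encoder-decoder mapping an object image to a later-time depiction."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),          # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(), # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

model = FuturePredictor()
frame_t = torch.rand(1, 3, 64, 64)      # a single depiction of an object
frame_future = model(frame_t)           # predicted appearance at a later time

# Recursive variant: feed each prediction back in to roll further into the future.
rollout = [frame_t]
for _ in range(3):
    rollout.append(model(rollout[-1]))
```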

Cite

Text

Zhou and Berg. "Learning Temporal Transformations from Time-Lapse Videos." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46484-8_16

Markdown

[Zhou and Berg. "Learning Temporal Transformations from Time-Lapse Videos." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/zhou2016eccv-learning/) doi:10.1007/978-3-319-46484-8_16

BibTeX

@inproceedings{zhou2016eccv-learning,
  title     = {{Learning Temporal Transformations from Time-Lapse Videos}},
  author    = {Zhou, Yipin and Berg, Tamara L.},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {262--277},
  doi       = {10.1007/978-3-319-46484-8_16},
  url       = {https://mlanthology.org/eccv/2016/zhou2016eccv-learning/}
}