An Uncertain Future: Forecasting from Static Images Using Variational Autoencoders
Abstract
In a given scene, humans can easily predict a set of immediate future events that might happen. However, pixel-level anticipation in computer vision is difficult because machine learning struggles with the ambiguity in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene—what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories while latent variables encode information that is not available in the image. We show that our method predicts events in a variety of scenes and can produce multiple different predictions for an ambiguous future. We also find that our method learns a representation that is applicable to semantic vision tasks.
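The core idea in the abstract — the image conditions the trajectory distribution while a latent variable supplies what the image cannot tell you — can be sketched at test time with a toy conditional decoder. This is a minimal illustration, not the paper's architecture: all dimensions, weights, and function names below are hypothetical, and the trained encoder/decoder networks are stood in for by random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
IMG_DIM, Z_DIM, TRAJ_DIM = 8, 2, 4

# Randomly initialized weights stand in for a trained decoder network.
W_img = rng.normal(size=(TRAJ_DIM, IMG_DIM))
W_z = rng.normal(size=(TRAJ_DIM, Z_DIM))

def decode(image_feat, z):
    """Conditional decoder: the image feature shapes the prediction,
    while the latent z injects information not present in the image."""
    return np.tanh(W_img @ image_feat + W_z @ z)

# One static input image (as a feature vector).
image_feat = rng.normal(size=IMG_DIM)

# At test time, sampling different latents z ~ N(0, I) for the SAME
# image yields multiple distinct trajectory predictions — the model's
# way of expressing an ambiguous future.
samples = [decode(image_feat, rng.normal(size=Z_DIM)) for _ in range(3)]
```

During training, a CVAE would instead infer an approximate posterior over `z` from the observed trajectory and use the reparameterization trick to backpropagate through the sampling step; only the prior-sampling path shown here is needed to generate diverse futures at test time.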
Cite
Text
Walker et al. "An Uncertain Future: Forecasting from Static Images Using Variational Autoencoders." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46478-7_51
Markdown
[Walker et al. "An Uncertain Future: Forecasting from Static Images Using Variational Autoencoders." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/walker2016eccv-uncertain/) doi:10.1007/978-3-319-46478-7_51
BibTeX
@inproceedings{walker2016eccv-uncertain,
title = {{An Uncertain Future: Forecasting from Static Images Using Variational Autoencoders}},
author = {Walker, Jacob and Doersch, Carl and Gupta, Abhinav and Hebert, Martial},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {835--851},
doi = {10.1007/978-3-319-46478-7_51},
url = {https://mlanthology.org/eccv/2016/walker2016eccv-uncertain/}
}