Temporal Perception and Prediction in Ego-Centric Video
Abstract
Given a video of an activity, can we predict what will happen next? In this paper we explore two simple tasks related to temporal prediction in egocentric videos of everyday activities. We provide both human experiments, to understand how well people perform on these tasks, and computational models for prediction. Experiments indicate that both humans and computational models can do well on temporal prediction, and that personalization to a particular individual or environment significantly improves performance. Developing methods for temporal prediction could have far-reaching benefits, enabling robots or intelligent agents to anticipate what a person will do before they do it.
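To make the flavor of a temporal-prediction task concrete, below is a minimal, hypothetical sketch, not the authors' actual tasks or models: a binary classifier that, given features of two frames from the same video, predicts which frame came first. The synthetic features, the drift model, and the `make_pairs` helper are illustrative assumptions standing in for real visual descriptors.

```python
# Hypothetical temporal-ordering baseline (illustrative only, not the paper's model):
# given features of two frames from the same video, predict which frame comes first.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_pairs(n_pairs=2000, feat_dim=128):
    """Create synthetic (earlier, later) frame-feature pairs.

    Each pair is concatenated in both orders, so the classifier learns
    label 1 when the first frame precedes the second, and 0 otherwise.
    """
    earlier = rng.normal(size=(n_pairs, feat_dim))
    # Simulate temporal drift: later frames are a shifted, perturbed copy of earlier ones.
    later = earlier + 0.5 + 0.1 * rng.normal(size=(n_pairs, feat_dim))
    forward = np.hstack([earlier, later])    # correct order  -> label 1
    backward = np.hstack([later, earlier])   # reversed order -> label 0
    X = np.vstack([forward, backward])
    y = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])
    return X, y

X, y = make_pairs()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"temporal-order accuracy: {clf.score(X_test, y_test):.3f}")
```

In this toy setup, personalization (as studied in the paper) would correspond to fitting or fine-tuning such a model on data from a single wearer or environment rather than pooling across everyone.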
Cite
Text
Zhou and Berg. "Temporal Perception and Prediction in Ego-Centric Video." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.511

Markdown
[Zhou and Berg. "Temporal Perception and Prediction in Ego-Centric Video." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/zhou2015iccv-temporal/) doi:10.1109/ICCV.2015.511

BibTeX
@inproceedings{zhou2015iccv-temporal,
  title = {{Temporal Perception and Prediction in Ego-Centric Video}},
  author = {Zhou, Yipin and Berg, Tamara L.},
  booktitle = {International Conference on Computer Vision},
  year = {2015},
  doi = {10.1109/ICCV.2015.511},
  url = {https://mlanthology.org/iccv/2015/zhou2015iccv-temporal/}
}