Action Anticipation by Predicting Future Dynamic Images
Abstract
Human action-anticipation methods predict the future action by observing only a small portion of an action in progress. This is critical for applications where computers must react to human actions as early as possible, such as autonomous driving, human-robot interaction, and assistive robotics, among others. In this paper, we present a method for human action anticipation by predicting the most plausible future human motion. We represent human motion using Dynamic Images [1] and employ tailored loss functions to encourage a generative model to produce accurate future motion predictions. Our method outperforms the currently best-performing action-anticipation methods by 4% on the JHMDB-21, 5.2% on the UT-Interaction, and 5.1% on the UCF 101-24 benchmarks.
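The Dynamic Image representation referenced above summarizes a video clip's motion in a single image via rank pooling. As a rough illustration (not the paper's own code), the widely used approximate rank-pooling formulation of Bilen et al. weights each frame t of a T-frame clip by the coefficient 2t − T − 1 and sums; the function name and normalization below are illustrative choices:

```python
import numpy as np

def dynamic_image(frames):
    """Compute an approximate-rank-pooled dynamic image.

    frames: float or uint8 array of shape (T, H, W, C), the video clip.
    Returns a uint8 image of shape (H, W, C) summarizing the clip's motion.
    """
    T = frames.shape[0]
    # Approximate rank-pooling coefficients: alpha_t = 2t - T - 1, t = 1..T.
    # Early frames get negative weight, late frames positive weight,
    # so the weighted sum emphasizes the direction of temporal change.
    alphas = 2.0 * np.arange(1, T + 1) - T - 1
    di = np.tensordot(alphas, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] for visualization (an illustrative choice).
    di -= di.min()
    if di.max() > 0:
        di *= 255.0 / di.max()
    return di.astype(np.uint8)
```

A static clip (all frames identical) yields a flat dynamic image, since the coefficients sum to zero; any motion between frames shows up as structure in the weighted sum.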
Cite
Text
Opazo et al. "Action Anticipation by Predicting Future Dynamic Images." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11015-4_10
Markdown
[Opazo et al. "Action Anticipation by Predicting Future Dynamic Images." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/opazo2018eccvw-action/) doi:10.1007/978-3-030-11015-4_10
BibTeX
@inproceedings{opazo2018eccvw-action,
title = {{Action Anticipation by Predicting Future Dynamic Images}},
author = {Opazo, Cristian Rodriguez and Fernando, Basura and Li, Hongdong},
booktitle = {European Conference on Computer Vision Workshops},
year = {2018},
pages = {89-105},
doi = {10.1007/978-3-030-11015-4_10},
url = {https://mlanthology.org/eccvw/2018/opazo2018eccvw-action/}
}