Cubic LSTMs for Video Prediction
Abstract
Predicting future frames in videos has become a promising direction of research for both the computer vision and robot learning communities. The core of this problem involves moving object capture and future motion prediction. While object capture specifies which objects are moving in videos, motion prediction describes their future dynamics. Motivated by this analysis, we propose a Cubic Long Short-Term Memory (CubicLSTM) unit for video prediction. CubicLSTM consists of three branches, i.e., a spatial branch for capturing moving objects, a temporal branch for processing motions, and an output branch for combining the first two branches to generate predicted frames. Stacking multiple CubicLSTM units along the spatial and output branches, and then evolving along the temporal branch, forms a cubic recurrent neural network (CubicRNN). Experiments show that CubicRNN produces more accurate video predictions than prior methods on both synthetic and real-world datasets.
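The three-branch design described in the abstract can be sketched as follows. This is a minimal, illustrative sketch only, not the authors' implementation: the paper operates on feature maps with convolutions, whereas this toy version substitutes dense matrices on flat vectors, and all class and parameter names (`CubicLSTMSketch`, `Ws`, `Wt`, `Wo`) are invented for illustration. The spatial and temporal branches each run an independent LSTM-style update with their own hidden and cell states; the output branch fuses the two hidden states into the unit's output.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CubicLSTMSketch:
    """Toy three-branch recurrent unit (illustrative sketch, not the paper's code).

    - Spatial branch: LSTM update intended to capture moving objects.
    - Temporal branch: LSTM update intended to model motion dynamics.
    - Output branch: fuses the two hidden states into the unit's output.
    """

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = input_dim + hidden_dim
        # Each branch packs its four LSTM gates (input, forget, output,
        # candidate) into one weight matrix of shape (4 * hidden_dim, d).
        self.Ws = rng.standard_normal((4 * hidden_dim, d)) * 0.1  # spatial
        self.Wt = rng.standard_normal((4 * hidden_dim, d)) * 0.1  # temporal
        # Output branch combines the two branch hidden states.
        self.Wo = rng.standard_normal((hidden_dim, 2 * hidden_dim)) * 0.1
        self.hidden_dim = hidden_dim

    def _lstm_step(self, W, x, h, c):
        """Standard LSTM cell update with packed gates."""
        z = W @ np.concatenate([x, h])
        H = self.hidden_dim
        i = sigmoid(z[:H])          # input gate
        f = sigmoid(z[H:2 * H])     # forget gate
        o = sigmoid(z[2 * H:3 * H]) # output gate
        g = np.tanh(z[3 * H:])      # candidate cell state
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

    def step(self, x, spatial_state, temporal_state):
        """One unit update: run both branches, then fuse their outputs."""
        hs, cs = self._lstm_step(self.Ws, x, *spatial_state)
        ht, ct = self._lstm_step(self.Wt, x, *temporal_state)
        out = np.tanh(self.Wo @ np.concatenate([hs, ht]))
        return out, (hs, cs), (ht, ct)

# Minimal usage: one step from zero-initialized states.
cell = CubicLSTMSketch(input_dim=8, hidden_dim=16)
x = np.zeros(8)
spatial = (np.zeros(16), np.zeros(16))
temporal = (np.zeros(16), np.zeros(16))
out, spatial, temporal = cell.step(x, spatial, temporal)
```

A CubicRNN, as the abstract describes, would stack such units along the spatial and output dimensions and unroll the result over time along the temporal branch.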
Cite
Text
Fan et al. "Cubic LSTMs for Video Prediction." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33018263
Markdown
[Fan et al. "Cubic LSTMs for Video Prediction." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/fan2019aaai-cubic/) doi:10.1609/AAAI.V33I01.33018263
BibTeX
@inproceedings{fan2019aaai-cubic,
title = {{Cubic LSTMs for Video Prediction}},
author = {Fan, Hehe and Zhu, Linchao and Yang, Yi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {8263--8270},
doi = {10.1609/AAAI.V33I01.33018263},
url = {https://mlanthology.org/aaai/2019/fan2019aaai-cubic/}
}