TwoStreamVAN: Improving Motion Modeling in Video Generation
Abstract
Video generation is an inherently challenging task, as it requires modeling realistic temporal dynamics as well as spatial content. Existing methods entangle the two intrinsically different tasks of motion and content creation in a single generator network, and this entangled approach struggles to simultaneously generate plausible motion and content. To improve motion modeling in video generation tasks, we propose a two-stream model that disentangles motion generation from content generation, called a Two-Stream Variational Adversarial Network (TwoStreamVAN). Given an action label and a noise vector, our model creates clear and consistent motion, and thus yields photorealistic videos. The key idea is to progressively generate and fuse multi-scale motion with its corresponding spatial content. Our model significantly outperforms existing methods on the standard Weizmann Human Action, MUG Facial Expression, and VoxCeleb datasets, as well as on our new dataset of diverse human actions with challenging and complex motion. Our code is available at https://github.com/sunxm2357/TwoStreamVAN/.
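To make the two-stream idea concrete, the sketch below shows one way a generator might fuse motion and content features at multiple scales before upsampling to pixels. This is a minimal PyTorch illustration, not the authors' released implementation (see the repository linked above); the module names (TwoStreamGenerator, FusionBlock) and all layer sizes are assumptions.

import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Injects a motion feature map into a content feature map at one scale."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, content_feat, motion_feat):
        # Concatenate the two streams and mix them with a 3x3 convolution.
        return torch.relu(self.conv(torch.cat([content_feat, motion_feat], dim=1)))

class TwoStreamGenerator(nn.Module):
    """Decodes a content code and a motion code into one frame, fusing the
    two streams progressively from coarse (4x4) to fine (32x32) scales."""
    def __init__(self, z_dim=128, base=256):
        super().__init__()
        self.base = base
        self.content_fc = nn.Linear(z_dim, base * 4 * 4)
        self.motion_fc = nn.Linear(z_dim, base * 4 * 4)
        chans = [base, base // 2, base // 4, base // 8]  # 4x4 -> 8x8 -> 16x16 -> 32x32
        self.content_ups = nn.ModuleList()
        self.motion_ups = nn.ModuleList()
        self.fusions = nn.ModuleList()
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            self.content_ups.append(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1))
            self.motion_ups.append(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1))
            self.fusions.append(FusionBlock(c_out))
        self.to_rgb = nn.Conv2d(base // 8, 3, kernel_size=3, padding=1)

    def forward(self, z_content, z_motion):
        c = self.content_fc(z_content).view(-1, self.base, 4, 4)
        m = self.motion_fc(z_motion).view(-1, self.base, 4, 4)
        for up_c, up_m, fuse in zip(self.content_ups, self.motion_ups, self.fusions):
            c = torch.relu(up_c(c))
            m = torch.relu(up_m(m))
            c = fuse(c, m)  # fuse motion into content at this scale
        return torch.tanh(self.to_rgb(c))  # one 3-channel frame in [-1, 1]

# Usage: two latent codes in, one 32x32 RGB frame out per batch element.
frame = TwoStreamGenerator()(torch.randn(2, 128), torch.randn(2, 128))
print(frame.shape)  # torch.Size([2, 3, 32, 32])

In this sketch, generating a video amounts to decoding a sequence of motion codes against a fixed content code, which keeps appearance consistent across frames while the motion varies.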
Cite
Text
Sun et al. "TwoStreamVAN: Improving Motion Modeling in Video Generation." Winter Conference on Applications of Computer Vision, 2020.
Markdown
[Sun et al. "TwoStreamVAN: Improving Motion Modeling in Video Generation." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/sun2020wacv-twostreamvan/)
BibTeX
@inproceedings{sun2020wacv-twostreamvan,
  title = {{TwoStreamVAN: Improving Motion Modeling in Video Generation}},
  author = {Sun, Ximeng and Xu, Huijuan and Saenko, Kate},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year = {2020},
  url = {https://mlanthology.org/wacv/2020/sun2020wacv-twostreamvan/}
}