Video Diffusion Models

Abstract

Generating temporally coherent, high-fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture, and it enables joint training on image and video data, which we find to reduce the variance of minibatch gradients and speed up optimization. To generate longer and higher-resolution videos, we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present the first results on a large text-conditioned video generation task, as well as state-of-the-art results on an established unconditional video generation benchmark. Supplementary material is available at https://video-diffusion.github.io.
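
To make the joint image/video training idea concrete, below is a minimal sketch of a standard epsilon-prediction diffusion training loss applied to video tensors, where images can be passed as single-frame videos so both data types share one objective. The `denoiser` network and the exact noise schedule are placeholders (assumptions for illustration), not the authors' implementation.

# Illustrative sketch only: generic epsilon-prediction diffusion loss on
# video tensors of shape (B, T, C, H, W); images are treated as T == 1 videos
# so image and video minibatches use the same objective.
import torch

def diffusion_loss(denoiser, x0, alphas_cumprod):
    """x0: batch of videos (B, T, C, H, W); alphas_cumprod: 1-D tensor of
    cumulative noise-schedule products; denoiser(x_t, t) predicts the noise."""
    B = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (B,), device=x0.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1, 1)          # broadcast over T, C, H, W
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # forward noising step
    eps_hat = denoiser(x_t, t)                              # predict the added noise
    return torch.mean((eps - eps_hat) ** 2)                 # simple L2 (epsilon) loss

Because single-frame image batches and multi-frame video batches flow through the same loss, their gradients can be averaged within one optimization step, which is the mechanism the abstract credits with reducing minibatch gradient variance.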

Cite

Text

Ho et al. "Video Diffusion Models." ICLR 2022 Workshops: DGM4HSD, 2022.

Markdown

[Ho et al. "Video Diffusion Models." ICLR 2022 Workshops: DGM4HSD, 2022.](https://mlanthology.org/iclrw/2022/ho2022iclrw-video/)

BibTeX

@inproceedings{ho2022iclrw-video,
  title     = {{Video Diffusion Models}},
  author    = {Ho, Jonathan and Salimans, Tim and Gritsenko, Alexey A. and Chan, William and Norouzi, Mohammad and Fleet, David J.},
  booktitle = {ICLR 2022 Workshops: DGM4HSD},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/ho2022iclrw-video/}
}