CogVideo: Large-Scale Pretraining for Text-to-Video Generation via Transformers
Abstract
In this work, we present CogVideo, a 9B-parameter transformer for text-to-video generation. CogVideo is trained by inheriting a pretrained text-to-image model, CogView2, which significantly reduces the training cost and alleviates the scarcity and weak relevance of text-video data. We also propose a multi-frame-rate training strategy to better align text and video clips. CogVideo achieves state-of-the-art performance in machine evaluation and outperforms publicly available models by a large margin in human evaluation. Code and model are publicly available at https://github.com/THUDM/CogVideo.
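The abstract only names the multi-frame-rate strategy; the sketch below illustrates the general idea of conditioning a clip on its sampling frame rate so the model can align text with clips of varying temporal span. This is a minimal sketch under assumptions: the frame-rate set, the `<fps-k>` token format, the source frame rate, and the helper `make_training_example` are hypothetical illustrations, not CogVideo's actual implementation.

```python
import random

# Hypothetical frame-rate choices; the set actually used by CogVideo may differ.
FRAME_RATES = [1, 2, 4, 8]   # target frames per second
SOURCE_FPS = 24              # assumed frame rate of the raw training video

def make_training_example(text_tokens, video_frames, clip_len=5):
    """Sample a fixed-length clip at a random frame rate and prepend a
    frame-rate token to the text conditioning sequence."""
    fps = random.choice(FRAME_RATES)
    # Lower target fps -> larger stride -> the clip covers a longer time span.
    stride = max(SOURCE_FPS // fps, 1)
    clip = video_frames[::stride][:clip_len]
    fps_token = f"<fps-{fps}>"  # hypothetical special token
    return [fps_token] + text_tokens, clip
```

Under this scheme, the same fixed number of frames can describe either a short, fine-grained motion or a long, coarse one, and the frame-rate token tells the transformer which it is seeing; at inference time, varying the token would control the temporal span of the generated clip.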
Cite
Text
Hong et al. "CogVideo: Large-Scale Pretraining for Text-to-Video Generation via Transformers." International Conference on Learning Representations, 2023.
Markdown
[Hong et al. "CogVideo: Large-Scale Pretraining for Text-to-Video Generation via Transformers." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/hong2023iclr-cogvideo/)
BibTeX
@inproceedings{hong2023iclr-cogvideo,
  title     = {{CogVideo: Large-Scale Pretraining for Text-to-Video Generation via Transformers}},
  author    = {Hong, Wenyi and Ding, Ming and Zheng, Wendi and Liu, Xinghan and Tang, Jie},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/hong2023iclr-cogvideo/}
}