Fast and Memory-Efficient Video Diffusion Using Streamlined Inference
Abstract
The rapid progress in artificial intelligence-generated content (AIGC), especially with diffusion models, has significantly advanced the development of high-quality video generation. However, current video diffusion models exhibit demanding computational requirements and high peak memory usage, especially when generating longer and higher-resolution videos. These limitations greatly hinder the practical application of video diffusion models on standard hardware platforms. To tackle this issue, we present a novel, training-free framework named Streamlined Inference, which leverages the temporal and spatial properties of video diffusion models. Our approach integrates three core components: Feature Slicer, Operator Grouping, and Step Rehash. Specifically, Feature Slicer effectively partitions input features into sub-features, and Operator Grouping processes each sub-feature with a group of consecutive operators, resulting in significant memory reduction without sacrificing quality or speed. Step Rehash further exploits the similarity between adjacent diffusion steps and accelerates inference by skipping unnecessary steps. Extensive experiments demonstrate that our approach significantly reduces peak memory and computational overhead, making it feasible to generate high-quality videos on a single consumer GPU (e.g., reducing the peak memory of AnimateDiff from 42GB to 11GB, with faster inference on a 2080 Ti).
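The interplay of Feature Slicer and Operator Grouping described above can be sketched in a minimal NumPy illustration. This is a hypothetical simplification, not the paper's implementation: `operator_group` stands in for a group of consecutive model operators, and pointwise arithmetic stands in for the real layers. The key idea shown is that each sub-feature passes through the entire operator group before the next slice is processed, so peak activation memory is bounded by one slice rather than the full feature map.

```python
import numpy as np

def operator_group(x):
    # Stand-in for a group of consecutive operators
    # (e.g., norm -> conv -> activation in a real video diffusion model).
    return np.maximum(x * 2.0 + 1.0, 0.0)

def streamlined_forward(features, num_slices=4):
    # Feature Slicer: partition the input into sub-features along the frame axis.
    slices = np.array_split(features, num_slices, axis=0)
    # Operator Grouping: run each sub-feature through the whole operator group
    # before touching the next slice, so only one slice's intermediate
    # activations are alive at any time.
    return np.concatenate([operator_group(s) for s in slices], axis=0)
```

For elementwise operator groups like this stand-in, the sliced computation is exactly equivalent to the unsliced one; the paper's contribution lies in applying this scheme to real attention and convolution blocks without hurting quality or speed.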
Cite
Text
Zhan et al. "Fast and Memory-Efficient Video Diffusion Using Streamlined Inference." Neural Information Processing Systems, 2024. doi:10.52202/079017-0437
Markdown
[Zhan et al. "Fast and Memory-Efficient Video Diffusion Using Streamlined Inference." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhan2024neurips-fast/) doi:10.52202/079017-0437
BibTeX
@inproceedings{zhan2024neurips-fast,
title = {{Fast and Memory-Efficient Video Diffusion Using Streamlined Inference}},
author = {Zhan, Zheng and Wu, Yushu and Gong, Yifan and Meng, Zichong and Kong, Zhenglun and Yang, Changdi and Yuan, Geng and Zhao, Pu and Niu, Wei and Wang, Yanzhi},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0437},
url = {https://mlanthology.org/neurips/2024/zhan2024neurips-fast/}
}