ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
Abstract
Diffusion models show promising generation capability for a variety of data. Despite their high generation quality, inference with diffusion models remains time-consuming due to the large number of sampling iterations required. To accelerate inference, we propose ReDi, a simple yet learning-free Retrieval-based Diffusion sampling framework. From a precomputed knowledge base, ReDi retrieves a trajectory similar to the partially generated trajectory at an early stage of generation, skips a large portion of the intermediate steps, and continues sampling from a later step in the retrieved trajectory. We theoretically prove that the generation performance of ReDi is guaranteed. Our experiments demonstrate that ReDi achieves a 2× speedup in model inference. Furthermore, ReDi generalizes well to zero-shot cross-domain image generation tasks such as image stylization. The code and demo for ReDi are available at https://github.com/zkx06111/ReDiffusion.
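To make the procedure concrete, below is a minimal sketch of the retrieve-and-skip idea described in the abstract. It is not the authors' implementation: `sample_step`, the toy denoising update, and the parameter names (`k_early`, `k_late`) are illustrative assumptions; a real system would call a trained diffusion sampler (e.g. a DDIM step) and likely use an approximate nearest-neighbor index over the stored keys.

```python
import numpy as np

# Toy stand-in for one reverse-diffusion step; a real system would call a
# trained sampler here (e.g. a DDIM update). This function is an assumption.
def sample_step(x, t):
    return x - 0.01 * x  # placeholder denoising update

def build_knowledge_base(init_noises, T, k_early, k_late):
    """Precompute (key, value) pairs from full sampling runs:
    key = state after k_early steps, value = state after k_late steps."""
    keys, values = [], []
    for x in init_noises:
        trajectory = [x]
        for t in range(T, 0, -1):
            x = sample_step(x, t)
            trajectory.append(x)
        keys.append(trajectory[k_early].ravel())
        values.append(trajectory[k_late])
    return np.stack(keys), values

def redi_sample(x, T, k_early, k_late, kb_keys, kb_values):
    """Run k_early steps, retrieve the nearest stored trajectory by its
    early-stage key, jump to its k_late state (skipping the steps in
    between), and finish the remaining T - k_late steps normally."""
    for t in range(T, T - k_early, -1):
        x = sample_step(x, t)
    nearest = int(np.argmin(np.linalg.norm(kb_keys - x.ravel(), axis=1)))
    x = kb_values[nearest]  # skip the intermediate steps via retrieval
    for t in range(T - k_late, 0, -1):
        x = sample_step(x, t)
    return x

# Example usage with random "noise" images:
rng = np.random.default_rng(0)
kb_keys, kb_values = build_knowledge_base(
    [rng.standard_normal((8, 8)) for _ in range(16)],
    T=50, k_early=5, k_late=40)
out = redi_sample(rng.standard_normal((8, 8)),
                  T=50, k_early=5, k_late=40,
                  kb_keys=kb_keys, kb_values=kb_values)
```

With T = 50, k_early = 5, and k_late = 40, this sketch runs only 15 denoising steps instead of 50; the 35 skipped steps are replaced by a single nearest-neighbor lookup, which is where the speedup comes from.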
Cite
Text
Zhang et al. "ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval." International Conference on Machine Learning, 2023.Markdown
[Zhang et al. "ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/zhang2023icml-redi/)BibTeX
@inproceedings{zhang2023icml-redi,
title = {{ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval}},
author = {Zhang, Kexun and Yang, Xianjun and Wang, William Yang and Li, Lei},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {41770--41785},
volume = {202},
url = {https://mlanthology.org/icml/2023/zhang2023icml-redi/}
}