In-Context Learning Unlocked for Diffusion Models
Abstract
We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models. Given a pair of task-specific example images, such as a depth map and its image or a scribble and its image, together with text guidance, our model automatically understands the underlying task and performs it on a new query image, following the text guidance. To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks, together with a diffusion model that takes it as input. The diffusion model is trained jointly on six different tasks using these prompts. The resulting Prompt Diffusion model is the first diffusion-based vision-language foundation model capable of in-context learning. It demonstrates high-quality in-context generation on the trained tasks and generalizes effectively to new, unseen vision tasks given their respective prompts. Our model also shows compelling text-guided image editing results. Our framework aims to facilitate research into in-context learning for computer vision. We share our code and pre-trained models at https://github.com/Zhendong-Wang/Prompt-Diffusion.
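As a minimal sketch of the idea described above (not the authors' actual API; every name below is hypothetical, and the real implementation lives in the linked repository), the vision-language prompt can be thought of as a small record holding one example pair, a query image, and the text guidance, whose visual parts are stacked into a single conditioning tensor for a ControlNet-style branch of the diffusion model:

```python
from dataclasses import dataclass
import torch

@dataclass
class VisionLanguagePrompt:
    """One in-context example pair plus a query and text guidance.

    All tensors are (C, H, W) images. Field names are illustrative
    assumptions, not the paper's actual interface.
    """
    example_source: torch.Tensor   # e.g. a depth map or scribble
    example_target: torch.Tensor   # the corresponding image
    query: torch.Tensor            # new input the inferred task is applied to
    text: str                      # text guidance for generation

def build_condition(prompt: VisionLanguagePrompt) -> torch.Tensor:
    """Stack the visual parts of the prompt along the channel axis into
    one conditioning tensor, which a ControlNet-style conditioning branch
    of the diffusion model could consume alongside the text embedding."""
    return torch.cat(
        [prompt.example_source, prompt.example_target, prompt.query], dim=0
    )

if __name__ == "__main__":
    h = w = 64
    p = VisionLanguagePrompt(
        example_source=torch.rand(3, h, w),
        example_target=torch.rand(3, h, w),
        query=torch.rand(3, h, w),
        text="a photo of a living room",
    )
    print(build_condition(p).shape)  # torch.Size([9, 64, 64])
```

Under this reading, swapping the example pair (say, from depth/image to scribble/image) changes the task the model infers, while the query and text steer the specific output; the joint training over six tasks mentioned in the abstract is what lets a single model interpret such prompts.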
Cite
Text
Wang et al. "In-Context Learning Unlocked for Diffusion Models." Neural Information Processing Systems, 2023.
Markdown
[Wang et al. "In-Context Learning Unlocked for Diffusion Models." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wang2023neurips-incontext/)
BibTeX
@inproceedings{wang2023neurips-incontext,
  title     = {{In-Context Learning Unlocked for Diffusion Models}},
  author    = {Wang, Zhendong and Jiang, Yifan and Lu, Yadong and Shen, Yelong and He, Pengcheng and Chen, Weizhu and Wang, Zhangyang "Atlas" and Zhou, Mingyuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/wang2023neurips-incontext/}
}