Exploiting Diffusion Prior for Generalizable Dense Prediction
Abstract
Contents generated by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate due to the immitigable domain gap. We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks. To address the misalignment between deterministic prediction tasks and stochastic T2I models, we reformulate the diffusion process through a sequence of interpolations, establishing a deterministic mapping between input RGB images and output prediction distributions. To preserve generalizability, we use low-rank adaptation to fine-tune pre-trained models. Extensive experiments across five tasks, including 3D property estimation, semantic segmentation, and intrinsic image decomposition, showcase the efficacy of the proposed method. Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
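The reformulation described in the abstract can be sketched in a few lines: instead of adding Gaussian noise, each forward step blends the target prediction map with the input RGB image, so the reverse chain becomes a deterministic function of the input. Below is a minimal PyTorch sketch under that assumption; the names `interpolate` and `denoiser` and the alpha schedule are illustrative placeholders, not the authors' implementation (in DMP, the denoiser would be a pre-trained T2I U-Net fine-tuned with low-rank adaptation).

```python
import torch

def interpolate(label, rgb, t, alphas_cumprod):
    """Deterministic 'forward' step: blend the target prediction map with
    the input RGB image, which takes the place of Gaussian noise."""
    a = alphas_cumprod[t]
    return a.sqrt() * label + (1.0 - a).sqrt() * rgb

@torch.no_grad()
def predict(rgb, denoiser, alphas_cumprod, num_steps=50):
    """Reverse chain: start from the RGB image itself and iteratively refine.
    Every step is deterministic, so one input always yields one prediction."""
    x = rgb.clone()
    for t in reversed(range(num_steps)):
        label_hat = denoiser(x, t)  # network estimates the target map
        x = interpolate(label_hat, rgb, max(t - 1, 0), alphas_cumprod)
    return x

# Example usage with a dummy denoiser and a linear schedule:
alphas_cumprod = torch.linspace(1.0, 0.0, 50)  # a_0 ~ 1 (all label), a_T ~ 0 (all RGB)
rgb = torch.rand(1, 3, 64, 64)
pred = predict(rgb, lambda x, t: x.mean(1, keepdim=True).expand_as(x),
               alphas_cumprod)
```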
Cite
Text
Lee et al. "Exploiting Diffusion Prior for Generalizable Dense Prediction." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00751
Markdown
[Lee et al. "Exploiting Diffusion Prior for Generalizable Dense Prediction." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/lee2024cvpr-exploiting/) doi:10.1109/CVPR52733.2024.00751
BibTeX
@inproceedings{lee2024cvpr-exploiting,
title = {{Exploiting Diffusion Prior for Generalizable Dense Prediction}},
author = {Lee, Hsin-Ying and Tseng, Hung-Yu and Lee, Hsin-Ying and Yang, Ming-Hsuan},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {7861--7871},
doi = {10.1109/CVPR52733.2024.00751},
url = {https://mlanthology.org/cvpr/2024/lee2024cvpr-exploiting/}
}