Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Control
Abstract
Diffusion models have emerged as effective distribution estimators in vision, language, and reinforcement learning, but their use as priors in downstream tasks poses an intractable posterior inference problem. This paper studies *amortized* sampling of the posterior over data, $\mathbf{x}\sim p^{\rm post}(\mathbf{x})\propto p(\mathbf{x})r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or likelihood function $r(\mathbf{x})$. We state and prove the asymptotic correctness of a data-free learning objective, *relative trajectory balance*, for training a diffusion model that samples from this posterior, a problem that existing methods solve only approximately or in restricted cases. Relative trajectory balance arises from the generative flow network perspective on diffusion models, which allows the use of deep reinforcement learning techniques to improve mode coverage. Experiments illustrate the broad potential of unbiased inference of arbitrary posteriors under diffusion priors: in vision (classifier guidance), language (infilling under a discrete diffusion LLM), and multimodal data (text-to-image generation). Beyond generative modeling, we apply relative trajectory balance to the problem of continuous control with a score-based behavior prior, achieving state-of-the-art results on benchmarks in offline reinforcement learning. Code is available at [this link](https://github.com/GFNOrg/diffusion-finetuning).
Cite
Text
Venkatraman et al. "Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Control." Neural Information Processing Systems, 2024. doi:10.52202/079017-2422

Markdown

[Venkatraman et al. "Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Control." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/venkatraman2024neurips-amortizing/) doi:10.52202/079017-2422

BibTeX
@inproceedings{venkatraman2024neurips-amortizing,
title = {{Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Control}},
author = {Venkatraman, Siddarth and Jain, Moksh and Scimeca, Luca and Kim, Minsu and Sendera, Marcin and Hasan, Mohsin and Rowe, Luke and Mittal, Sarthak and Lemos, Pablo and Bengio, Emmanuel and Adam, Alexandre and Rector-Brooks, Jarrid and Bengio, Yoshua and Berseth, Glen and Malkin, Nikolay},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-2422},
url = {https://mlanthology.org/neurips/2024/venkatraman2024neurips-amortizing/}
}