Composing Normalizing Flows for Inverse Problems
Abstract
Given an inverse problem with a normalizing flow prior, we wish to estimate the distribution of the underlying signal conditioned on the observations. We approach this problem as a task of conditional inference on the pre-trained unconditional flow model. We first establish that this is computationally hard for a large class of flow models. Motivated by this, we propose a framework for approximate inference that estimates the target conditional as a composition of two flow models. This formulation leads to a stable variational inference training procedure that avoids adversarial training. Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification. We further demonstrate that our approach can be amortized for zero-shot inference.
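The abstract's core idea, fitting a new flow composed with a frozen pretrained prior flow by variational inference, can be illustrated on a 1-D toy problem. The sketch below is not the paper's implementation: the "pretrained flow" is a stand-in affine map, the trainable flow is affine, and the ELBO is maximized with finite-difference gradient ascent; all names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Pretrained unconditional flow prior (hypothetical 1-D toy) ---
# G maps base noise u ~ N(0,1) to x; this affine map represents a
# Gaussian prior p(x) = N(2.0, 1.5^2). In the paper this would be a
# deep normalizing flow; the affine form is only a stand-in.
MU_P, SIG_P = 2.0, 1.5
def G(u):         return MU_P + SIG_P * u
def log_prior(x): return -0.5 * ((x - MU_P) / SIG_P) ** 2 - np.log(SIG_P * np.sqrt(2 * np.pi))

# --- Inverse problem: observe y = x + noise, noise ~ N(0, 0.5^2) ---
Y_OBS, SIG_N = 4.0, 0.5
def log_lik(x):   return -0.5 * ((Y_OBS - x) / SIG_N) ** 2 - np.log(SIG_N * np.sqrt(2 * np.pi))

# --- Trainable flow h composed with the frozen prior flow ---
# Posterior sampler: z ~ N(0,1) -> u = h(z) = a*z + b -> x = G(u).
# Because both maps are invertible, q(x) has a tractable log-density
# via the change-of-variables formula (Jacobian |a| * SIG_P).
def elbo(a, b, z):
    x = G(a * z + b)
    log_q = -0.5 * z**2 - np.log(np.sqrt(2 * np.pi)) - np.log(abs(a) * SIG_P)
    return np.mean(log_lik(x) + log_prior(x) - log_q)

# Maximize the ELBO, i.e. minimize KL(q || p(x|y)), by finite-difference
# gradient ascent with common random numbers -- no adversarial training.
a, b, lr, eps = 1.0, 0.0, 0.05, 1e-4
for step in range(2000):
    z = rng.standard_normal(512)
    ga = (elbo(a + eps, b, z) - elbo(a - eps, b, z)) / (2 * eps)
    gb = (elbo(a, b + eps, z) - elbo(a, b - eps, z)) / (2 * eps)
    a, b = a + lr * ga, b + lr * gb

# Exact Gaussian posterior for comparison (precision-weighted combination).
post_var = 1.0 / (1.0 / SIG_P**2 + 1.0 / SIG_N**2)
post_mu = post_var * (MU_P / SIG_P**2 + Y_OBS / SIG_N**2)
print("learned mean/std:", MU_P + SIG_P * b, abs(a) * SIG_P)
print("exact   mean/std:", post_mu, np.sqrt(post_var))
```

Because everything here is Gaussian, the learned composition recovers the closed-form posterior; the same training loop applies unchanged when `G` is a deep flow and the likelihood is a nonlinear forward model, which is the setting the paper targets.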
Cite
Text
Whang et al. "Composing Normalizing Flows for Inverse Problems." International Conference on Machine Learning, 2021.
Markdown
[Whang et al. "Composing Normalizing Flows for Inverse Problems." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/whang2021icml-composing/)
BibTeX
@inproceedings{whang2021icml-composing,
title = {{Composing Normalizing Flows for Inverse Problems}},
author = {Whang, Jay and Lindgren, Erik and Dimakis, Alex},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {11158--11169},
volume = {139},
url = {https://mlanthology.org/icml/2021/whang2021icml-composing/}
}