Normalizing Flow Neural Networks by JKO Scheme
Abstract
Normalizing flow is a class of deep generative models for efficient sampling and likelihood estimation, which achieves attractive performance, particularly in high dimensions. The flow is often implemented using a sequence of invertible residual blocks. Existing works adopt special network architectures and regularization of flow trajectories. In this paper, we develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto (JKO) scheme, which unfolds the discrete-time dynamics of the Wasserstein gradient flow. The proposed method stacks residual blocks one after another, allowing efficient block-wise training of the residual blocks and avoiding sampling SDE trajectories, score matching, and variational learning, thus reducing the memory load and the difficulty of end-to-end training. We also develop an adaptive time reparameterization of the flow network with a progressive refinement of the induced trajectory in probability space to further improve the model accuracy. Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance compared with existing flow and diffusion models at a significantly reduced computational and memory cost.
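The block-wise training idea can be illustrated with a minimal 1-D sketch, assuming a toy affine residual block and a finite-difference optimizer (all names and simplifications below are illustrative, not the paper's actual architecture or code): each block is fitted in isolation by minimizing a negative log-likelihood term under the standard-normal target plus a JKO-style Wasserstein movement penalty (1/2h)·E‖f(x) − x‖², and the data is pushed through the trained block before the next one is stacked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: samples from N(3, 1); the flow's target is the standard normal.
x = rng.normal(3.0, 1.0, size=2000)

def block_loss(a, b, x, h):
    """Hypothetical per-block JKO objective in 1-D:
    NLL under N(0,1) minus the log-determinant of the affine residual map,
    plus the Wasserstein movement penalty (1/2h) * E||f(x) - x||^2."""
    y = x + a * x + b                       # invertible residual block f(x) = x + a*x + b
    nll = 0.5 * np.mean(y ** 2) - np.log(abs(1.0 + a))
    move = np.mean((y - x) ** 2) / (2.0 * h)
    return nll + move

def train_block(x, h, lr=0.05, steps=400, eps=1e-4):
    """Fit one block by finite-difference gradient descent (kept simple on purpose)."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = (block_loss(a + eps, b, x, h) - block_loss(a - eps, b, x, h)) / (2 * eps)
        gb = (block_loss(a, b + eps, x, h) - block_loss(a, b - eps, x, h)) / (2 * eps)
        a, b = a - lr * ga, b - lr * gb
    return a, b

# Stack blocks one after another: each block trains on the previous block's output,
# so no end-to-end backpropagation through the whole flow is needed.
h = 1.0
for k in range(4):
    a, b = train_block(x, h)
    x = x + a * x + b

print(np.mean(x), np.std(x))
```

Because of the movement penalty, each block takes only a partial step toward the target, so the pushed-forward samples approach mean 0 and standard deviation 1 gradually over the stacked blocks rather than in a single jump.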
Cite
Text
Xu et al. "Normalizing Flow Neural Networks by JKO Scheme." NeurIPS 2023 Workshops: OTML, 2023.
Markdown
[Xu et al. "Normalizing Flow Neural Networks by JKO Scheme." NeurIPS 2023 Workshops: OTML, 2023.](https://mlanthology.org/neuripsw/2023/xu2023neuripsw-normalizing/)
BibTeX
@inproceedings{xu2023neuripsw-normalizing,
title = {{Normalizing Flow Neural Networks by JKO Scheme}},
author = {Xu, Chen and Cheng, Xiuyuan and Xie, Yao},
booktitle = {NeurIPS 2023 Workshops: OTML},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/xu2023neuripsw-normalizing/}
}