D-Flow: Differentiating Through Flows for Controlled Generation

Abstract

Taming the generation outcome of state-of-the-art Diffusion and Flow-Matching (FM) models without re-training a task-specific model unlocks a powerful tool for solving inverse problems, conditional generation, and controlled generation in general. In this work we introduce D-Flow, a simple framework for controlling the generation process by differentiating through the flow and optimizing for the source (noise) point. We motivate this framework with a key observation: for Diffusion/FM models trained with Gaussian probability paths, differentiating through the generation process projects the gradient onto the data manifold, implicitly injecting the prior into the optimization process. We validate our framework on linear and non-linear controlled generation problems, including image and audio inverse problems and conditional molecule generation, reaching state-of-the-art performance across all of them.
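The core idea above (optimize the source noise point by differentiating a loss on the generated sample back through the flow's ODE solve) can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the velocity field is a hand-picked linear stand-in for a trained FM model, and names like `flow`, `NUM_STEPS`, and the target are our own assumptions.

```python
import numpy as np

# Toy sketch of the D-Flow idea: control generation by optimizing the
# source point x0 through the flow. The velocity field v(x, t) = A @ x
# is a LINEAR stand-in for a trained Flow-Matching model.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # rotation-like velocity field
NUM_STEPS, H = 50, 0.02          # Euler steps and step size (t: 0 -> 1)

def flow(x0):
    """Integrate dx/dt = A x from t=0 to t=1 with explicit Euler."""
    x = x0
    for _ in range(NUM_STEPS):
        x = x + H * (A @ x)
    return x

def grad_loss_wrt_x0(x0, target):
    """Backpropagate d||x1 - target||^2 / dx0 through the Euler steps.
    Each step is linear, x_{k+1} = (I + H A) x_k, so the adjoint is
    ((I + H A)^T)^NUM_STEPS applied to the loss gradient at x1."""
    x1 = flow(x0)
    g = 2.0 * (x1 - target)          # dL/dx1
    step_T = (np.eye(2) + H * A).T
    for _ in range(NUM_STEPS):
        g = step_T @ g               # pull the gradient back one step
    return g

target = np.array([1.0, 2.0])        # hypothetical control target
x0 = np.zeros(2)                     # initial source (noise) point
for _ in range(200):                 # gradient descent on the source
    x0 = x0 - 0.1 * grad_loss_wrt_x0(x0, target)

print(np.linalg.norm(flow(x0) - target))  # residual shrinks toward zero
```

With a trained model, `flow` would be a numerical ODE solve through the learned velocity network and the gradient would come from automatic differentiation rather than the hand-derived adjoint used here.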

Cite

Text

Ben-Hamu et al. "D-Flow: Differentiating Through Flows for Controlled Generation." International Conference on Machine Learning, 2024.

Markdown

[Ben-Hamu et al. "D-Flow: Differentiating Through Flows for Controlled Generation." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/benhamu2024icml-dflow/)

BibTeX

@inproceedings{benhamu2024icml-dflow,
  title     = {{D-Flow: Differentiating Through Flows for Controlled Generation}},
  author    = {Ben-Hamu, Heli and Puny, Omri and Gat, Itai and Karrer, Brian and Singer, Uriel and Lipman, Yaron},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {3462--3483},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/benhamu2024icml-dflow/}
}