Greed Is Good: A Unifying Perspective on Guided Generation

Abstract

Training-free guided generation is a widely used and powerful technique that allows the end user to exert further control over the generative process of flow/diffusion models. Generally speaking, two families of techniques have emerged for *gradient-based guidance*: namely, *posterior guidance* (*i.e.*, guidance via projecting the current sample to the target distribution via the target prediction model) and *end-to-end guidance* (*i.e.*, guidance by performing backpropagation through the entire ODE solve). In this work, we show that these two seemingly separate families can actually be *unified* by viewing posterior guidance as a *greedy strategy* within *end-to-end guidance*. We explore the theoretical connections between these two families and provide an in-depth theoretical analysis of both techniques relative to the *continuous ideal gradients*. Motivated by this analysis, we then present a method for *interpolating* between these two families, enabling a trade-off between compute and the accuracy of the guidance gradients. We validate this work on several image inverse problems and property-guided molecular generation.
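The distinction between the two families can be illustrated on a scalar toy problem. The sketch below is a minimal, hypothetical illustration (not the paper's implementation): `x0_pred` stands in for a trained denoiser, the rectified-flow-style velocity `(x - x0_pred)/t` for the probability-flow ODE, and finite differences for automatic differentiation. Posterior guidance differentiates a guidance loss through the one-jump target prediction only, while end-to-end guidance differentiates through the full ODE solve.

```python
import numpy as np

def x0_pred(x, t):
    # Hypothetical toy "denoiser" predicting the clean sample x0 from x_t;
    # a trained network would take its place in practice.
    return np.tanh(x)

def velocity(x, t):
    # Probability-flow velocity implied by the x0 prediction
    # (rectified-flow parametrization: v = (x_t - x0_hat) / t).
    return (x - x0_pred(x, t)) / t

def solve_ode(x, t, n_steps=100):
    # Euler solve of the probability-flow ODE from time t down to ~0.
    ts = np.linspace(t, 1e-3, n_steps + 1)
    for i in range(n_steps):
        dt = ts[i + 1] - ts[i]  # negative step (integrating backward)
        x = x + dt * velocity(x, ts[i])
    return x

def loss(x):
    # Guidance objective: pull the generated sample toward a target value.
    return 0.5 * (x - 2.0) ** 2

def grad_fd(f, x, eps=1e-5):
    # Central finite difference, standing in for backpropagation.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x_t, t = 0.7, 0.8

# Posterior ("greedy") guidance: gradient through the one-jump x0 estimate.
g_post = grad_fd(lambda x: loss(x0_pred(x, t)), x_t)

# End-to-end guidance: gradient through the entire ODE solve.
g_e2e = grad_fd(lambda x: loss(solve_ode(x, t)), x_t)

print("posterior gradient:", g_post)
print("end-to-end gradient:", g_e2e)
```

Both gradients point the sample toward the target, but they generally differ in magnitude: the greedy gradient costs one model evaluation, while the end-to-end gradient requires differentiating the whole solve, which is the compute/accuracy trade-off the interpolation scheme navigates.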

Cite

Text

Blasingame and Liu. "Greed Is Good: A Unifying Perspective on Guided Generation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Blasingame and Liu. "Greed Is Good: A Unifying Perspective on Guided Generation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/blasingame2025neurips-greed/)

BibTeX

@inproceedings{blasingame2025neurips-greed,
  title     = {{Greed Is Good: A Unifying Perspective on Guided Generation}},
  author    = {Blasingame, Zander W. and Liu, Chen},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/blasingame2025neurips-greed/}
}