Repulsive Latent Score Distillation for Solving Inverse Problems

Abstract

Score Distillation Sampling (SDS) has been pivotal for leveraging pre-trained diffusion models in downstream tasks such as inverse problems, but it faces two major challenges: $(i)$ mode collapse and $(ii)$ latent space inversion, which become more pronounced in high-dimensional data. To address mode collapse, we introduce a novel variational framework for posterior sampling. Utilizing the Wasserstein gradient flow interpretation of SDS, we propose a multimodal variational approximation with a \emph{repulsion} mechanism that promotes diversity among particles by penalizing pairwise kernel-based similarity. This repulsion acts as a simple regularizer, encouraging a more diverse set of solutions. To mitigate latent space ambiguity, we extend this framework with an \emph{augmented} variational distribution that disentangles the latent and data variables. This repulsive augmented formulation balances computational efficiency, quality, and diversity. Extensive experiments on linear and nonlinear inverse tasks with high-resolution images ($512 \times 512$) using pre-trained Stable Diffusion models demonstrate the effectiveness of our approach.
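The core idea of the repulsion mechanism, penalizing pairwise kernel-based similarity so that particles spread apart, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which operates on Stable Diffusion latents within the SDS update); the Gaussian kernel, `bandwidth`, and step size here are illustrative assumptions.

```python
import numpy as np

def repulsion_gradients(particles, bandwidth=1.0):
    """Gradient of the summed pairwise Gaussian-kernel similarity.

    For each particle x_i, accumulates d/dx_i k(x_i, x_j) over all
    other particles x_j. Descending this gradient pushes nearby
    (similar) particles apart, acting as a diversity regularizer.
    """
    n = particles.shape[0]
    grads = np.zeros_like(particles)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = particles[i] - particles[j]
            k = np.exp(-np.sum(diff ** 2) / (2 * bandwidth ** 2))
            # d/dx_i exp(-||x_i - x_j||^2 / (2 h^2)) = -k (x_i - x_j) / h^2
            grads[i] += -k * diff / bandwidth ** 2
    return grads

rng = np.random.default_rng(0)
particles = rng.normal(size=(4, 8))  # 4 particles in an 8-dim space
rep = repulsion_gradients(particles)
# A gradient-descent step on the kernel similarity moves each particle
# away from its neighbors, increasing pairwise distances.
updated = particles - 0.1 * rep
```

In the full method, this repulsion term is added to the per-particle SDS update, so each particle follows the score of the diffusion prior while being repelled from the other particles in the batch.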

Cite

Text

Zilberstein et al. "Repulsive Latent Score Distillation for Solving Inverse Problems." International Conference on Learning Representations, 2025.

Markdown

[Zilberstein et al. "Repulsive Latent Score Distillation for Solving Inverse Problems." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zilberstein2025iclr-repulsive/)

BibTeX

@inproceedings{zilberstein2025iclr-repulsive,
  title     = {{Repulsive Latent Score Distillation for Solving Inverse Problems}},
  author    = {Zilberstein, Nicolas and Mardani, Morteza and Segarra, Santiago},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zilberstein2025iclr-repulsive/}
}