An Empirical Study of Policy Interpolation via Diffusion Models

Abstract

Diffusion-based policies have shown great potential in multi-task settings, as inference-time steering lets them solve new tasks without additional training. In this paper, we study the inference-time composition of diffusion-based policies through a range of interpolation methods. Our results show that, whereas existing methods merely switch between predefined action modes, our proposed approach can generate entirely new action patterns from existing policies, without any further training or tuning.
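To make the idea of inference-time composition concrete, the sketch below interpolates the noise predictions of two pretrained diffusion policies at every reverse-diffusion step. This is a minimal illustration under assumed conventions (DDPM-style sampling, a convex interpolation weight `w`, and toy `eps_a`/`eps_b` policy networks), not the authors' actual method or weighting scheme.

```python
import numpy as np

def compose_policies(eps_a, eps_b, w, x_T, alphas, rng):
    """Reverse diffusion sampling where the noise prediction is a convex
    combination of two policies' predictions:
        eps = (1 - w) * eps_a(x, t) + w * eps_b(x, t)

    eps_a, eps_b: callables (x, t) -> predicted noise, same shape as x
                  (hypothetical stand-ins for two pretrained policies)
    w:            interpolation weight in [0, 1]
    x_T:          initial Gaussian noise sample
    alphas:       per-step DDPM noise schedule (assumed, length T)
    """
    x = x_T
    T = len(alphas)
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(T)):
        # Interpolate the two policies' noise predictions at this step.
        eps = (1.0 - w) * eps_a(x, t) + w * eps_b(x, t)
        a, ab = alphas[t], alpha_bars[t]
        # Standard DDPM posterior-mean update using the composed prediction.
        x = (x - (1.0 - a) / np.sqrt(1.0 - ab) * eps) / np.sqrt(a)
        if t > 0:
            # Inject fresh noise on all but the final step.
            x = x + np.sqrt(1.0 - a) * rng.standard_normal(x.shape)
    return x
```

Setting `w = 0` or `w = 1` recovers each base policy's sampler, while intermediate weights compose the two denoising directions at every step rather than merely selecting one policy's output.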

Cite

Text

Xie et al. "An Empirical Study of Policy Interpolation via Diffusion Models." ICLR 2025 Workshops: MCDC, 2025.

Markdown

[Xie et al. "An Empirical Study of Policy Interpolation via Diffusion Models." ICLR 2025 Workshops: MCDC, 2025.](https://mlanthology.org/iclrw/2025/xie2025iclrw-empirical/)

BibTeX

@inproceedings{xie2025iclrw-empirical,
  title     = {{An Empirical Study of Policy Interpolation via Diffusion Models}},
  author    = {Xie, Yuqing and Yu, Chao and Zhang, Ya and Wang, Yu},
  booktitle = {ICLR 2025 Workshops: MCDC},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/xie2025iclrw-empirical/}
}