Generative Trajectory Stitching Through Diffusion Composition
Abstract
Effective trajectory stitching for long-horizon planning is a significant challenge in robotic decision-making. While diffusion models have shown promise in planning, they are limited to solving tasks similar to those seen in their training data. We propose CompDiffuser, a novel generative approach that can solve new tasks by learning to compositionally stitch together shorter trajectory chunks from previously seen tasks. Our key insight is modeling the trajectory distribution by subdividing it into overlapping chunks and learning their conditional relationships through a single bidirectional diffusion model. This allows information to propagate between segments during generation, ensuring physically consistent connections. We conduct experiments on benchmark tasks of various difficulties, covering different environment sizes, agent state dimensions, trajectory types, and training data quality, and show that CompDiffuser significantly outperforms existing methods.
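The core idea in the abstract — generating a long trajectory as overlapping chunks whose shared states are reconciled at every denoising step so information flows between neighbors — can be sketched in a toy form. This is not the paper's actual model: the learned bidirectional diffusion denoiser is replaced here by a hypothetical stand-in (`toy_denoise_step`) that simply pulls each chunk toward a straight line, and all function names are illustrative.

```python
import numpy as np

def split_chunks(traj_len, chunk_len, overlap):
    """Start indices of overlapping chunks covering [0, traj_len)."""
    step = chunk_len - overlap
    starts = list(range(0, traj_len - chunk_len + 1, step))
    if starts[-1] + chunk_len < traj_len:
        starts.append(traj_len - chunk_len)  # ensure full coverage
    return starts

def toy_denoise_step(chunk):
    """Hypothetical stand-in for a learned per-chunk denoiser:
    nudges states toward the line between the chunk's endpoints."""
    target = np.linspace(chunk[0], chunk[-1], len(chunk))
    return chunk + 0.5 * (target - chunk)

def compositional_sample(traj_len, chunk_len=8, overlap=4, steps=20, dim=2, seed=0):
    """Denoise overlapping chunks jointly; averaging the overlap regions
    at each step couples neighboring chunks, a crude analogue of the
    bidirectional information propagation described in the abstract."""
    rng = np.random.default_rng(seed)
    traj = rng.normal(size=(traj_len, dim))  # start from pure noise
    starts = split_chunks(traj_len, chunk_len, overlap)
    for _ in range(steps):
        acc = np.zeros_like(traj)
        cnt = np.zeros((traj_len, 1))
        for s in starts:
            acc[s:s + chunk_len] += toy_denoise_step(traj[s:s + chunk_len])
            cnt[s:s + chunk_len] += 1
        traj = acc / cnt  # overlap states get the mean of both chunks' updates
    return traj
```

Because overlapping states are averaged rather than denoised independently, neighboring chunks cannot drift apart, which is the stitching property the paper's conditional model enforces in a learned way.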
Cite
Text
Luo et al. "Generative Trajectory Stitching Through Diffusion Composition." Advances in Neural Information Processing Systems, 2025.
Markdown
[Luo et al. "Generative Trajectory Stitching Through Diffusion Composition." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/luo2025neurips-generative/)
BibTeX
@inproceedings{luo2025neurips-generative,
title = {{Generative Trajectory Stitching Through Diffusion Composition}},
author = {Luo, Yunhao and Mishra, Utkarsh Aashu and Du, Yilun and Xu, Danfei},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/luo2025neurips-generative/}
}