Accelerating Diffusion-Based Combinatorial Optimization Solvers by Progressive Distillation
Abstract
Graph-based diffusion models have shown promising results in generating high-quality solutions to NP-complete (NPC) combinatorial optimization (CO) problems. However, these models are often inefficient at inference time because the denoising diffusion process requires many iterative model evaluations. This paper proposes to use $\textit{progressive}$ distillation to speed up inference by taking fewer denoising steps (e.g., forecasting two steps ahead within a single step). Our experimental results show that the progressively distilled model can perform inference $\textbf{16}$ times faster with only $\textbf{0.019}$% degradation in performance on the TSP-50 dataset.
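For intuition, the following is a minimal, hypothetical sketch of one progressive-distillation training step in PyTorch: a student network is trained to reproduce, in a single deterministic (DDIM-style) denoising step, the result of two such steps taken by a frozen teacher. The paper's solver diffuses over discrete graph/edge variables, so the continuous Gaussian parameterization and the names below (`student`, `teacher`, `alphas`, `distill_step`) are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: continuous-noise progressive distillation,
# matching two teacher steps with one student step. Not the authors' code.
import torch


def distill_step(student, teacher, x_t, t, alphas, optimizer):
    """One distillation update: the student jumps t -> t-2 in a single step.

    student, teacher: networks predicting the clean signal x0 from (x_t, t).
    x_t:    noisy sample at timestep t.
    t:      integer timestep with t >= 2.
    alphas: 1-D tensor of cumulative signal coefficients, alphas[t] = sqrt(alpha_bar_t).
    """
    with torch.no_grad():
        # Teacher takes two consecutive deterministic steps: t -> t-1 -> t-2.
        x = x_t
        for step in (t, t - 1):
            x0_pred = teacher(x, step)
            eps = (x - alphas[step] * x0_pred) / (1 - alphas[step] ** 2).sqrt()
            x = alphas[step - 1] * x0_pred + (1 - alphas[step - 1] ** 2).sqrt() * eps
        target = x  # teacher's result after two steps

    # Student makes a single jump from t directly to t-2.
    x0_student = student(x_t, t)
    eps_s = (x_t - alphas[t] * x0_student) / (1 - alphas[t] ** 2).sqrt()
    x_student = alphas[t - 2] * x0_student + (1 - alphas[t - 2] ** 2).sqrt() * eps_s

    # Simplified variant: match the student's one-step output to the
    # teacher's two-step output directly in sample space.
    loss = torch.nn.functional.mse_loss(x_student, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After such a distillation round, the student needs half as many denoising steps as the teacher; repeating the procedure (student becomes the next teacher) compounds the speedup, which is how a 16x reduction in inference steps can be reached.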
Cite
Text
Huang et al. "Accelerating Diffusion-Based Combinatorial Optimization Solvers by Progressive Distillation." ICML 2023 Workshops: SODS, 2023.
Markdown
[Huang et al. "Accelerating Diffusion-Based Combinatorial Optimization Solvers by Progressive Distillation." ICML 2023 Workshops: SODS, 2023.](https://mlanthology.org/icmlw/2023/huang2023icmlw-accelerating/)
BibTeX
@inproceedings{huang2023icmlw-accelerating,
title = {{Accelerating Diffusion-Based Combinatorial Optimization Solvers by Progressive Distillation}},
author = {Huang, Junwei and Sun, Zhiqing and Yang, Yiming},
booktitle = {ICML 2023 Workshops: SODS},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/huang2023icmlw-accelerating/}
}