A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization

Abstract

Learning to sample from intractable distributions over discrete sets without relying on corresponding training data is a central problem in a wide range of fields, including Combinatorial Optimization. Currently, popular deep learning-based approaches rely primarily on generative models that yield exact sample likelihoods. This work introduces a method that lifts this restriction and opens the possibility of employing highly expressive latent variable models like diffusion models. Our approach is conceptually based on a loss that upper bounds the reverse Kullback-Leibler divergence and evades the requirement of exact sample likelihoods. We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
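The kind of bound referred to in the abstract can be illustrated with a standard construction (a sketch, not the paper's exact derivation): for a latent variable model, the marginal likelihood $q_\theta(x) = \sum_z q_\theta(x, z)$ is intractable, so the reverse KL divergence to a target distribution $p(x)$ (in Combinatorial Optimization, typically a Boltzmann distribution $p(x) \propto \exp(-\beta H(x))$ over solutions $x$ with energy $H$) cannot be evaluated directly. However, the chain rule of the KL divergence gives an upper bound that only requires the tractable joint $q_\theta(x, z)$:

$$
D_{\mathrm{KL}}\!\left(q_\theta(x) \,\|\, p(x)\right)
\;\le\;
D_{\mathrm{KL}}\!\left(q_\theta(x, z) \,\|\, p(x)\, p(z \mid x)\right)
\;=\;
\mathbb{E}_{q_\theta(x, z)}\!\left[\log \frac{q_\theta(x, z)}{p(x)\, p(z \mid x)}\right],
$$

since the joint KL decomposes as the marginal KL plus the non-negative expected conditional KL. Minimizing the right-hand side therefore drives the model marginal toward the target without ever computing $\log q_\theta(x)$ exactly; for diffusion models, $z$ plays the role of the intermediate denoising trajectory.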

Cite

Text

Sanokowski et al. "A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization." International Conference on Machine Learning, 2024.

Markdown

[Sanokowski et al. "A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/sanokowski2024icml-diffusion/)

BibTeX

@inproceedings{sanokowski2024icml-diffusion,
  title     = {{A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization}},
  author    = {Sanokowski, Sebastian and Hochreiter, Sepp and Lehner, Sebastian},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {43346--43367},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/sanokowski2024icml-diffusion/}
}