Generating Streamlining Constraints with Large Language Models

Abstract

Streamlining constraints (or streamliners, for short) narrow the search space, enhancing the speed and feasibility of solving complex constraint satisfaction problems. Traditionally, streamliners were crafted manually or generated by systematically combining atomic constraints, with high-effort offline testing. Our approach utilizes the generative capabilities of Large Language Models (LLMs) to propose effective streamliners for problems specified in the MiniZinc constraint programming language, validating the proposals with quick empirical tests whose results are fed back to the LLM. Evaluated across seven diverse constraint satisfaction problems, our method achieves substantial runtime reductions. We compare results on obfuscated and disguised variants of the problems to check whether performance depends on LLM memorization. We also analyze whether longer offline runs improve the quality of streamliners and whether the LLM can propose good combinations of streamliners.
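To illustrate the idea, here is a minimal, hypothetical MiniZinc sketch (not taken from the paper): a toy model with an added streamliner that restricts the search space. The specific constraint shown is an assumption for illustration only.

```minizinc
% Toy CSP: place n distinct values; the streamliner narrows the search space.
include "alldifferent.mzn";

int: n = 8;
array[1..n] of var 1..n: x;

constraint alldifferent(x);  % core model constraint

% Streamliner (illustrative): force even positions to hold even values.
% This may exclude some solutions, but if satisfying assignments remain,
% the solver typically finds one much faster in the reduced space.
constraint forall(i in 1..n where i mod 2 = 0)(x[i] mod 2 = 0);

solve satisfy;
```

A streamliner is sound to use only in the sense that any solution it yields is a solution of the original model; if the streamlined model is unsatisfiable, one falls back to the unstreamlined model.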

Cite

Text

Voboril et al. "Generating Streamlining Constraints with Large Language Models." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.18965

Markdown

[Voboril et al. "Generating Streamlining Constraints with Large Language Models." Journal of Artificial Intelligence Research, 2025.](https://mlanthology.org/jair/2025/voboril2025jair-generating/) doi:10.1613/JAIR.1.18965

BibTeX

@article{voboril2025jair-generating,
  title     = {{Generating Streamlining Constraints with Large Language Models}},
  author    = {Voboril, Florentina and Ramaswamy, Vaidyanathan Peruvemba and Szeider, Stefan},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2025},
  doi       = {10.1613/JAIR.1.18965},
  volume    = {84},
  url       = {https://mlanthology.org/jair/2025/voboril2025jair-generating/}
}