Teaching Transformers to Solve Combinatorial Problems Through Efficient Trial & Error
Abstract
Despite their proficiency in various language tasks, Large Language Models (LLMs) struggle with combinatorial problems like Satisfiability, the Traveling Salesman Problem, or even basic arithmetic. We address this gap through a novel trial & error approach for solving problems in the class NP, where candidate solutions are iteratively generated and efficiently validated using verifiers. We focus on the paradigmatic task of Sudoku and achieve state-of-the-art accuracy (99%) compared to prior neuro-symbolic approaches. Unlike prior work that used custom architectures, our method employs a vanilla decoder-only Transformer (GPT-2) without external tools or function calling. Our method integrates imitation learning of simple Sudoku rules with an explicit Depth-First Search (DFS) exploration strategy involving informed guessing and backtracking. Moving beyond imitation learning, we seek to minimize the number of guesses until reaching a solution. This is achieved using depth-1 guessing, showing empirically that almost all Sudoku puzzles can be solved using the puzzle's rules with at most one guess. We provide a rigorous analysis of this setup, formalizing its connection to a contextual variant of *Min-Sum Set Cover*, a well-studied problem in algorithms and stochastic optimization.
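The depth-1 guessing strategy described above can be illustrated with a short sketch (not the paper's code, which trains a Transformer to imitate this behavior): apply simple deterministic rules ("naked singles") until stuck, then try a single guess per candidate and re-propagate, backtracking if the guess fails. All function names here are illustrative.

```python
def candidates(grid, r, c):
    """Digits 1-9 not already used in cell (r, c)'s row, column, or 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return [d for d in range(1, 10) if d not in used]

def propagate(grid):
    """Repeatedly fill cells with a unique candidate (the basic Sudoku rule).
    Returns 'solved', 'stuck', or 'contradiction'. Mutates grid in place."""
    while True:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cands = candidates(grid, r, c)
                    if not cands:
                        return "contradiction"
                    if len(cands) == 1:
                        grid[r][c] = cands[0]
                        progress = True
        if all(grid[r][c] for r in range(9) for c in range(9)):
            return "solved"
        if not progress:
            return "stuck"

def solve_depth1(grid):
    """Rules first; if stuck, one depth-1 guess per candidate (DFS of depth 1)."""
    if propagate(grid) == "solved":
        return grid
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in candidates(grid, r, c):
                    trial = [row[:] for row in grid]  # copy = cheap backtracking
                    trial[r][c] = d
                    if propagate(trial) == "solved":
                        return trial
    return None  # would need deeper search
```

The abstract's empirical claim is that this depth-1 regime already suffices for almost all puzzles, which is what makes minimizing the number of guesses a meaningful objective.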
Cite
Text
Giannoulis et al. "Teaching Transformers to Solve Combinatorial Problems Through Efficient Trial & Error." Advances in Neural Information Processing Systems, 2025.
Markdown
[Giannoulis et al. "Teaching Transformers to Solve Combinatorial Problems Through Efficient Trial & Error." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/giannoulis2025neurips-teaching/)
BibTeX
@inproceedings{giannoulis2025neurips-teaching,
title = {{Teaching Transformers to Solve Combinatorial Problems Through Efficient Trial \& Error}},
author = {Giannoulis, Panagiotis and Pantis, Yorgos and Tzamos, Christos},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/giannoulis2025neurips-teaching/}
}