Constrained Sampling for Language Models Should Be Easy: An MCMC Perspective
Abstract
Constrained decoding enables Language Models (LMs) to produce samples that provably satisfy hard constraints. However, existing constrained-decoding approaches often distort the underlying model distribution, a limitation that is especially problematic in applications like program fuzzing, where one wants to generate diverse and valid program inputs for testing purposes. We propose a new constrained sampling framework based on Markov Chain Monte Carlo (MCMC) that simultaneously satisfies three core desiderata: constraint-satisfying (every sample satisfies the constraint), monotonically converging (the sampling process converges to the true conditional distribution), and efficient (high-quality samples emerge within a few steps). Our method constructs a proposal distribution over valid outputs and applies a Metropolis-Hastings acceptance criterion based on the LM's likelihood, ensuring principled and efficient exploration of the constrained space. Empirically, our sampler outperforms existing methods on both synthetic benchmarks and real-world program fuzzing tasks.
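The abstract does not spell out the sampler, but the Metropolis-Hastings loop it describes can be sketched generically. The following is a minimal sketch, assuming hypothetical helpers `lm_log_prob` (LM log-likelihood of a full sequence), `propose` (draws a new constraint-satisfying sequence given the current one), and `proposal_log_prob` (log density of the proposal move); none of these names come from the paper.

```python
import math
import random


def mh_constrained_sampler(lm_log_prob, propose, proposal_log_prob,
                           init_sample, num_steps=100):
    """Sketch of constrained sampling via Metropolis-Hastings.

    lm_log_prob(x)          -- log-likelihood of sequence x under the LM
    propose(x)              -- draws a constraint-satisfying candidate given x
    proposal_log_prob(x, y) -- log q(y | x), density of proposing y from x
    init_sample             -- any sequence that already satisfies the constraint
    """
    current = init_sample
    samples = []
    for _ in range(num_steps):
        candidate = propose(current)
        # MH log acceptance ratio:
        # log p(candidate) - log p(current) + log q(current|candidate) - log q(candidate|current)
        log_alpha = (lm_log_prob(candidate) - lm_log_prob(current)
                     + proposal_log_prob(candidate, current)
                     - proposal_log_prob(current, candidate))
        if math.log(random.random()) < min(0.0, log_alpha):
            current = candidate  # accept: move to the proposed valid sequence
        samples.append(current)  # every recorded sample satisfies the constraint
    return samples
```

Because both the initial state and every proposal lie inside the constrained set, all recorded samples are valid, while the acceptance step keeps the chain targeting the LM's conditional distribution over that set.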
Cite
Text
Gonzalez et al. "Constrained Sampling for Language Models Should Be Easy: An MCMC Perspective." Advances in Neural Information Processing Systems, 2025.

Markdown
[Gonzalez et al. "Constrained Sampling for Language Models Should Be Easy: An MCMC Perspective." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/gonzalez2025neurips-constrained/)

BibTeX
@inproceedings{gonzalez2025neurips-constrained,
title = {{Constrained Sampling for Language Models Should Be Easy: An MCMC Perspective}},
author = {Gonzalez, Emmanuel Anaya and Vaidya, Sairam and Park, Kanghee and Ji, Ruyi and Berg-Kirkpatrick, Taylor and D'Antoni, Loris},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/gonzalez2025neurips-constrained/}
}