Toward Trustworthy Neural Program Synthesis
Abstract
We develop an approach to estimate the probability that a program sampled from a large language model is correct. Given a natural language description of a programming problem, our method samples both candidate programs and candidate predicates specifying how the program should behave. This allows learning a model that forms a well-calibrated probabilistic prediction of program correctness. Our system also infers which predicates are useful for explaining the behavior of the generated code; in a human study, participants preferred these explanations over raw language model outputs. Our method is simple, easy to implement, and maintains state-of-the-art generation accuracy.
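A minimal sketch of the high-level idea described above: sample candidate programs and candidate predicates (executable tests), then score each program by how many sampled predicates it satisfies, a signal that a downstream model could calibrate into a correctness probability. All names and the feature choice here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch: programs are checked against sampled predicates,
# and per-program agreement rates serve as features for a calibration model.

def satisfies(program, predicate):
    """Return True if the candidate program passes the candidate predicate."""
    try:
        return bool(predicate(program))
    except Exception:
        # A crashing predicate counts as a failure.
        return False

def agreement_features(programs, predicates):
    """Fraction of sampled predicates each program satisfies."""
    return [
        sum(satisfies(p, t) for t in predicates) / len(predicates)
        for p in programs
    ]

# Toy example: "programs" are functions, "predicates" are executable checks.
programs = [lambda x: x + 1, lambda x: x * 2]
predicates = [
    lambda f: f(1) == 2,  # both candidates satisfy this
    lambda f: f(2) == 3,  # only the first candidate satisfies this
]
print(agreement_features(programs, predicates))  # [1.0, 0.5]
```

In practice such agreement scores would feed a learned calibrator (e.g., logistic regression) rather than being used directly.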
Cite
Text
Li et al. "Toward Trustworthy Neural Program Synthesis." ICLR 2025 Workshops: DL4C, 2025.
Markdown
[Li et al. "Toward Trustworthy Neural Program Synthesis." ICLR 2025 Workshops: DL4C, 2025.](https://mlanthology.org/iclrw/2025/li2025iclrw-trustworthy/)
BibTeX
@inproceedings{li2025iclrw-trustworthy,
title = {{Toward Trustworthy Neural Program Synthesis}},
author = {Li, Wen-Ding and Key, Darren Yan and Ellis, Kevin},
booktitle = {ICLR 2025 Workshops: DL4C},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/li2025iclrw-trustworthy/}
}