Guiding Program Synthesis by Learning to Generate Examples

Abstract

A key challenge for existing program synthesizers is ensuring that the synthesized program generalizes well. This is difficult to achieve because the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge with an iterative approach that finds ambiguities in the provided specification and learns to resolve them by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.
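The iterative loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`synthesize`, `find_ambiguity`, `choose_output`) are hypothetical, the candidate programs form a toy integer domain, and the learned output-selection model is replaced by a stub oracle passed in by the caller.

```python
def synthesize(examples, candidates):
    """Keep only the candidate programs consistent with all examples."""
    return [p for p in candidates if all(p(x) == y for x, y in examples)]

def find_ambiguity(programs, inputs):
    """Return an input on which the surviving programs disagree, if any."""
    for x in inputs:
        outputs = {p(x) for p in programs}
        if len(outputs) > 1:
            return x, sorted(outputs)
    return None

def disambiguate(examples, candidates, inputs, choose_output):
    """Iteratively generate new examples until one program remains.

    choose_output(x, outputs) stands in for the paper's learned model
    that decides which candidate output is correct for input x.
    """
    programs = synthesize(examples, candidates)
    while len(programs) > 1:
        ambiguity = find_ambiguity(programs, inputs)
        if ambiguity is None:
            break  # remaining programs agree on all probed inputs
        x, outputs = ambiguity
        y = choose_output(x, outputs)     # model picks the correct output
        examples = examples + [(x, y)]    # resolved ambiguity becomes a new example
        programs = synthesize(examples, programs)
    return programs

# Toy run: one user-provided example (2 -> 4) leaves two candidates
# ambiguous (doubling and squaring); a stub oracle for doubling
# resolves the ambiguity in one iteration.
candidates = [lambda v: v + 1, lambda v: 2 * v, lambda v: v * v]
result = disambiguate([(2, 4)], candidates, range(5),
                      lambda x, outputs: 2 * x)
```

Here `disambiguate` returns a single surviving program (the doubling function), showing how deciding on outputs, rather than comparing whole programs, prunes the candidate set.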

Cite

Text

Laich et al. "Guiding Program Synthesis by Learning to Generate Examples." International Conference on Learning Representations, 2020.

Markdown

[Laich et al. "Guiding Program Synthesis by Learning to Generate Examples." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/laich2020iclr-guiding/)

BibTeX

@inproceedings{laich2020iclr-guiding,
  title     = {{Guiding Program Synthesis by Learning to Generate Examples}},
  author    = {Laich, Larissa and Bielik, Pavol and Vechev, Martin},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/laich2020iclr-guiding/}
}