Sampling for Bayesian Program Learning

Abstract

Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. We give theoretical guarantees on how well the samples approximate the true posterior, and present empirical results showing that the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.
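The idea of sampling via random parity constraints can be illustrated with a toy sketch. The Python below is an illustrative assumption, not the paper's solver-based implementation: it brute-forces a small, explicit solution set, whereas the actual algorithm delegates constraint solving to a symbolic solver. Adding k random XOR (parity) constraints shrinks the satisfying set to roughly 2^-k of its size, so with k near log2 of the set size, returning any surviving solution yields an approximately uniform sample.

```python
import random

def satisfies_parity(x, rows, parities):
    """Check bit-vector x against XOR constraints: for each subset of
    indices `row`, the parity of the selected bits must equal p."""
    return all(sum(x[i] for i in row) % 2 == p
               for row, p in zip(rows, parities))

def sample_via_parity(solutions, n_bits, k, rng):
    """Draw one near-uniform sample from `solutions` by conjoining k
    random parity constraints and returning an arbitrary survivor
    (a solver would likewise return an arbitrary satisfying model)."""
    while True:
        # Each constraint XORs a uniformly random subset of the bits
        # and fixes its parity to a random 0/1 value.
        rows = [[i for i in range(n_bits) if rng.random() < 0.5]
                for _ in range(k)]
        parities = [rng.randrange(2) for _ in range(k)]
        survivors = [x for x in solutions
                     if satisfies_parity(x, rows, parities)]
        if survivors:
            return survivors[0]
        # All solutions eliminated: resample the constraints and retry.

# Demo on a toy "program space": 6-bit vectors with exactly three 1s.
rng = random.Random(0)
solutions = [tuple(int(b) for b in format(v, "06b"))
             for v in range(64) if bin(v).count("1") == 3]
sample = sample_via_parity(solutions, n_bits=6, k=4, rng=rng)
```

With |solutions| = 20 and k = 4, each constraint set leaves about 20 / 2^4 ≈ 1.25 survivors in expectation, which is the regime where the survivor is close to a uniform draw; the paper's guarantees make this approximation precise for posterior-weighted program spaces.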

Cite

Text

Ellis et al. "Sampling for Bayesian Program Learning." Neural Information Processing Systems, 2016.

Markdown

[Ellis et al. "Sampling for Bayesian Program Learning." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/ellis2016neurips-sampling/)

BibTeX

@inproceedings{ellis2016neurips-sampling,
  title     = {{Sampling for Bayesian Program Learning}},
  author    = {Ellis, Kevin and Solar-Lezama, Armando and Tenenbaum, Josh},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {1297--1305},
  url       = {https://mlanthology.org/neurips/2016/ellis2016neurips-sampling/}
}