PLANS: Neuro-Symbolic Program Learning from Videos

Abstract

Recent years have seen the rise of statistical program learning based on neural models as an alternative to traditional rule-based systems for programming by example. Rule-based approaches offer correctness guarantees in an unsupervised way, as they inherently capture logical rules, while neural models scale more realistically to raw, high-dimensional input and are resistant to noisy I/O specifications. We introduce PLANS (Program LeArning from Neurally inferred Specifications), a hybrid model for program synthesis from visual observations that gets the best of both worlds, relying on (i) a neural architecture trained to extract abstract, high-level information from each individual raw input, and (ii) a rule-based system that uses the extracted information as I/O specifications to synthesize a program capturing the different observations. To address the key challenge of making PLANS resistant to noise in the network's output, we introduce a dynamic filtering algorithm for I/O specifications based on selective classification techniques. We obtain state-of-the-art performance on program synthesis from diverse demonstration videos in the Karel and ViZDoom environments, while requiring no ground-truth program for training.
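
To make the pipeline concrete, the following is a minimal, illustrative sketch of a PLANS-style loop, not the author's implementation: a neural specification extractor assigns a confidence score to each inferred I/O pair, a selective-classification-style filter keeps only pairs above a confidence threshold, and a rule-based synthesizer is queried on the filtered set, with the threshold relaxed step by step until synthesis succeeds. The names (plans_synthesis, synthesize), the threshold schedule, and the strict-to-permissive ordering are all assumptions made for illustration.

from typing import Callable, Optional, Sequence, Tuple

# Hypothetical type: one (input state, output state) pair inferred from a video.
Spec = Tuple[object, object]

def plans_synthesis(
    scored_specs: Sequence[Tuple[Spec, float]],             # neurally inferred specs + confidences
    synthesize: Callable[[Sequence[Spec]], Optional[str]],  # rule-based DSL solver; None on failure
    thresholds: Sequence[float] = (0.99, 0.95, 0.9, 0.75, 0.5),  # illustrative schedule
) -> Optional[str]:
    # Dynamic filtering of I/O specifications in the spirit of selective
    # classification: begin with only the most confident specs, then admit
    # lower-confidence ones until the symbolic synthesizer finds a program
    # consistent with the retained observations.
    for tau in thresholds:
        kept = [spec for spec, confidence in scored_specs if confidence >= tau]
        if not kept:
            continue  # nothing passes this threshold; relax it further
        program = synthesize(kept)
        if program is not None:
            return program
    return None  # no program consistent with any filtered subset

Here synthesize stands in for any off-the-shelf rule-based synthesizer over the target DSL, and the confidence scores could, for instance, be softmax probabilities produced by the specification-extraction network; both choices are placeholders rather than details taken from the paper.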

Cite

Text

Raphaël Dang-Nhu. "PLANS: Neuro-Symbolic Program Learning from Videos." Neural Information Processing Systems, 2020.

Markdown

[Raphaël Dang-Nhu. "PLANS: Neuro-Symbolic Program Learning from Videos." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/dangnhu2020neurips-plans/)

BibTeX

@inproceedings{dangnhu2020neurips-plans,
  title     = {{PLANS: Neuro-Symbolic Program Learning from Videos}},
  author    = {Dang-Nhu, Raphaël},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/dangnhu2020neurips-plans/}
}