Differentiable Programs with Neural Libraries

Abstract

We develop a framework for combining differentiable programming languages with neural networks. Using this framework we create end-to-end trainable systems that learn to write interpretable algorithms with perceptual components. We explore the benefits of inductive biases for strong generalization and modularity that come from the program-like structure of our models. In particular, modularity allows us to learn a library of (neural) functions which grows and improves as more tasks are solved. Empirically, we show that this leads to lifelong learning systems that transfer knowledge to new tasks more effectively than baselines.
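
The abstract's central idea, a shared library of neural functions that is composed inside interpretable, hand-written differentiable programs and improves as more tasks are solved, can be illustrated with a minimal PyTorch sketch. This is an illustrative assumption only, not the paper's actual system: the names `NeuralLibrary`, `read_digit`, `task_a_program`, and `task_b_program` are hypothetical, and the digit-addition and parity tasks are stand-ins for whatever perceptual tasks a user might write programs for.

```python
import torch
import torch.nn as nn


class NeuralLibrary(nn.Module):
    """Shared library of learned perceptual functions, reused across tasks."""

    def __init__(self):
        super().__init__()
        # read_digit: maps a flattened 28x28 image to logits over digits 0-9.
        self.read_digit = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
        )


def task_a_program(library: NeuralLibrary, img1, img2):
    """Interpretable program for task A: 'add the two perceived digits'."""
    d1 = library.read_digit(img1).softmax(-1)          # (B, 10)
    d2 = library.read_digit(img2).softmax(-1)          # (B, 10)
    joint = d1.unsqueeze(-1) * d2.unsqueeze(-2)        # (B, 10, 10)
    # Differentiable marginal over the sum i + j: a distribution over 0..18.
    return torch.stack(
        [sum(joint[:, i, s - i] for i in range(max(0, s - 9), min(s, 9) + 1))
         for s in range(19)],
        dim=-1,
    )


def task_b_program(library: NeuralLibrary, img):
    """A later task reuses the same library function: 'is the digit even?'."""
    d = library.read_digit(img).softmax(-1)
    even_mass = d[:, 0::2].sum(-1, keepdim=True)
    return torch.cat([1.0 - even_mass, even_mass], dim=-1)


library = NeuralLibrary()
optimizer = torch.optim.Adam(library.parameters(), lr=1e-3)
# Training on either task back-propagates through its program into the shared
# library, so perceptual knowledge learned on task A transfers to task B.
```

In this toy setup the programs themselves stay interpretable (explicit arithmetic over perceived digits), while gradients from every task flow into the common library module, which is the kind of modularity and cross-task transfer the abstract describes.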

Cite

Text

Gaunt et al. "Differentiable Programs with Neural Libraries." International Conference on Machine Learning, 2017.

Markdown

[Gaunt et al. "Differentiable Programs with Neural Libraries." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/gaunt2017icml-differentiable/)

BibTeX

@inproceedings{gaunt2017icml-differentiable,
  title     = {{Differentiable Programs with Neural Libraries}},
  author    = {Gaunt, Alexander L. and Brockschmidt, Marc and Kushman, Nate and Tarlow, Daniel},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {1213--1222},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/gaunt2017icml-differentiable/}
}