Learning to Infer Graphics Programs from Hand-Drawn Images
Abstract
We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are a specification (spec) of what the graphics program needs to draw. We learn a model that uses program synthesis techniques to recover a graphics program from that spec. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings.
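To make the two-stage pipeline in the abstract concrete, here is a minimal sketch of the idea: a perception stage proposes a spec of drawing primitives, and a synthesis stage tries to compress that spec into a program with a loop. Everything here is hypothetical illustration; the names (`Circle`, `synthesize_loop`) and the trivial arithmetic-progression search stand in for the paper's actual DSL and constraint-based synthesizer.

```python
# Toy illustration of the pipeline described in the abstract:
#   image --(neural net)--> primitive spec --(synthesis)--> program.
# All names and the search strategy are hypothetical stand-ins; the
# paper's DSL and synthesizer are richer than this sketch.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Circle:
    x: int
    y: int

def synthesize_loop(spec: List[Circle]) -> Optional[str]:
    """Try to explain a row of circles as a single for-loop.

    Succeeds only when consecutive circles differ by a constant
    offset, i.e. the spec is an arithmetic progression in (x, y).
    """
    if len(spec) < 2:
        return None
    dx = spec[1].x - spec[0].x
    dy = spec[1].y - spec[0].y
    for prev, cur in zip(spec, spec[1:]):
        if (cur.x - prev.x, cur.y - prev.y) != (dx, dy):
            return None  # not a uniform repetition; no loop found
    x0, y0 = spec[0].x, spec[0].y
    return (f"for i in range({len(spec)}): "
            f"circle(x={x0} + {dx}*i, y={y0} + {dy}*i)")

# A spec such as a CNN might propose for three evenly spaced circles:
spec = [Circle(1, 2), Circle(4, 2), Circle(7, 2)]
print(synthesize_loop(spec))
# -> for i in range(3): circle(x=1 + 3*i, y=2 + 0*i)
```

Once a loop like this is recovered, extrapolation in the sense the abstract mentions is immediate: increasing the loop bound continues the pattern beyond what was drawn, and re-rendering the program can correct primitives the network mislocated.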
Cite
Text
Ellis et al. "Learning to Infer Graphics Programs from Hand-Drawn Images." Neural Information Processing Systems, 2018.
Markdown
[Ellis et al. "Learning to Infer Graphics Programs from Hand-Drawn Images." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/ellis2018neurips-learning/)
BibTeX
@inproceedings{ellis2018neurips-learning,
  title     = {{Learning to Infer Graphics Programs from Hand-Drawn Images}},
  author    = {Ellis, Kevin and Ritchie, Daniel and Solar-Lezama, Armando and Tenenbaum, Josh},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {6059--6068},
  url       = {https://mlanthology.org/neurips/2018/ellis2018neurips-learning/}
}