Learning to Infer Generative Template Programs for Visual Concepts
Abstract
People grasp flexible visual concepts from a few examples. We explore a neurosymbolic system that learns how to infer programs that capture visual concepts in a domain-general fashion. We introduce Template Programs: programmatic expressions from a domain-specific language that specify structural and parametric patterns common to an input concept. Our framework supports multiple concept-related tasks, including few-shot generation and co-segmentation through parsing. We develop a learning paradigm that allows us to train networks that infer Template Programs directly from visual datasets that contain concept groupings. We run experiments across multiple visual domains: 2D layouts, Omniglot characters, and 3D shapes. We find that our method outperforms task-specific alternatives, and performs competitively against domain-specific approaches for the limited domains where they exist.
Cite
Text
Jones et al. "Learning to Infer Generative Template Programs for Visual Concepts." International Conference on Machine Learning, 2024.
Markdown
[Jones et al. "Learning to Infer Generative Template Programs for Visual Concepts." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/jones2024icml-learning/)
BibTeX
@inproceedings{jones2024icml-learning,
  title = {{Learning to Infer Generative Template Programs for Visual Concepts}},
  author = {Jones, R. Kenny and Chaudhuri, Siddhartha and Ritchie, Daniel},
  booktitle = {International Conference on Machine Learning},
  year = {2024},
  pages = {22465--22490},
  volume = {235},
  url = {https://mlanthology.org/icml/2024/jones2024icml-learning/}
}