SketchEmbedNet: Learning Novel Concepts by Imitating Drawings
Abstract
Sketch drawings capture the salient information of visual concepts. Previous work has shown that neural networks are capable of producing sketches of natural objects drawn from a small number of classes. While earlier approaches focus on generation quality or retrieval, we explore properties of image representations learned by training a model to produce sketches of images. We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting. Additionally, we find that these learned representations exhibit interesting structure and compositionality.
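The few-shot use of the learned embeddings can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not the authors' exact evaluation protocol: a frozen encoder from the sketch-generation model maps images to embeddings, and query images are classified by the nearest class centroid computed from a handful of labeled support examples. The `embed` function is a placeholder standing in for the trained SketchEmbedNet encoder.

```python
import numpy as np

def embed(images: np.ndarray) -> np.ndarray:
    """Placeholder for the frozen sketch-generation encoder.

    A real implementation would run the trained CNN encoder of the
    sketching model; here we just flatten pixels so the sketch runs.
    Input: (N, H, W) images. Output: (N, D) embeddings.
    """
    return images.reshape(images.shape[0], -1).astype(np.float32)

def few_shot_classify(support_x, support_y, query_x):
    """Nearest-centroid few-shot classification on frozen embeddings."""
    z_support, z_query = embed(support_x), embed(query_x)
    classes = np.unique(support_y)
    # One centroid per class, averaged over its support embeddings.
    centroids = np.stack([z_support[support_y == c].mean(axis=0)
                          for c in classes])
    # Assign each query to the class whose centroid is closest.
    dists = np.linalg.norm(z_query[:, None] - centroids[None], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy usage: a 5-way, 1-shot episode with random 28x28 "images".
rng = np.random.default_rng(0)
support_x = rng.random((5, 28, 28))
support_y = np.arange(5)
query_x = rng.random((3, 28, 28))
print(few_shot_classify(support_x, support_y, query_x))
```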
Cite
Text
Wang et al. "SketchEmbedNet: Learning Novel Concepts by Imitating Drawings." International Conference on Machine Learning, 2021.
Markdown
[Wang et al. "SketchEmbedNet: Learning Novel Concepts by Imitating Drawings." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/wang2021icml-sketchembednet/)
BibTeX
@inproceedings{wang2021icml-sketchembednet,
  title     = {{SketchEmbedNet: Learning Novel Concepts by Imitating Drawings}},
  author    = {Wang, Alexander and Ren, Mengye and Zemel, Richard},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {10870--10881},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/wang2021icml-sketchembednet/}
}