Human-like Sketch Object Recognition via Analogical Learning

Abstract

Deep learning systems can perform well on some image recognition tasks. However, they have serious limitations, including requiring far more training data than humans do and being fooled by adversarial examples. By contrast, analogical learning over relational representations tends to be far more data-efficient, requiring only human-like amounts of training data. This paper introduces an approach that combines automatically constructed qualitative visual representations with analogical learning to tackle a hard computer vision problem: object recognition from sketches. Results on the MNIST dataset and a novel dataset, the Coloring Book Objects dataset, are provided. Comparison to existing approaches indicates that analogical generalization can identify sketched objects from these datasets with several orders of magnitude fewer examples than deep learning systems require.
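To make the abstract's approach concrete, here is a minimal, hypothetical Python sketch of classification by analogical comparison over relational representations. It is not the paper's system: the qualitative predicates (closedCurve, straight, vertical, short) are invented for illustration, and plain set overlap stands in for structure-mapping similarity.

def similarity(a: set, b: set) -> float:
    """Jaccard overlap of relational facts (a crude analogy score)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

class AnalogicalClassifier:
    """Stores a few labeled relational cases; labels queries by best analog."""

    def __init__(self):
        self.cases = []  # list of (label, facts) pairs

    def add_example(self, label, facts):
        self.cases.append((label, set(facts)))

    def classify(self, facts):
        facts = set(facts)
        # Retrieve the most analogous stored case and reuse its label.
        label, _score = max(
            ((lbl, similarity(facts, stored)) for lbl, stored in self.cases),
            key=lambda pair: pair[1],
        )
        return label

# Usage: one qualitative "digit sketch" per class, then a noisy query.
clf = AnalogicalClassifier()
clf.add_example("0", [("closedCurve", "e1"), ("convex", "e1")])
clf.add_example("1", [("straight", "e1"), ("vertical", "e1")])
print(clf.classify([("straight", "e1"), ("vertical", "e1"), ("short", "e2")]))  # -> 1

Even with one example per class, the noisy query retrieves the correct analog, which is the data-efficiency point the abstract emphasizes.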

Cite

Text

Chen et al. "Human-like Sketch Object Recognition via Analogical Learning." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/aaai.v33i01.33011336

Markdown

[Chen et al. "Human-like Sketch Object Recognition via Analogical Learning." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/chen2019aaai-human/) doi:10.1609/aaai.v33i01.33011336

BibTeX

@inproceedings{chen2019aaai-human,
  title     = {{Human-like Sketch Object Recognition via Analogical Learning}},
  author    = {Chen, Kezhen and Rabkina, Irina and McLure, Matthew D. and Forbus, Kenneth D.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {1336--1343},
  doi       = {10.1609/aaai.v33i01.33011336},
  url       = {https://mlanthology.org/aaai/2019/chen2019aaai-human/}
}