Attentive Recurrent Comparators

Abstract

Rapid learning requires flexible representations that can quickly adapt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for one-shot learning. In the task of one-shot classification on the Omniglot dataset, we achieve state-of-the-art performance with an error rate of 1.5%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.
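The core idea of an ARC, as described in the abstract, is a recurrent controller that alternates attention glimpses between the two objects being compared, so the final hidden state encodes their relationship. The following is a minimal sketch of that glimpse-alternation loop, not the authors' implementation: the hidden size, glimpse size, number of glimpses, and all parameter names (`Wx`, `Wh`, `Wa`) are illustrative assumptions, and a simple tanh recurrence with a hard crop stands in for the paper's LSTM controller and differentiable attention.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 32  # hidden state size (assumed)
G = 4   # glimpse window side length (assumed)
T = 8   # total glimpses, alternating between the two images (assumed)

# Hypothetical parameters of a tiny recurrent controller and attention head.
Wx = rng.normal(0, 0.1, (H, G * G))  # glimpse -> hidden
Wh = rng.normal(0, 0.1, (H, H))      # hidden -> hidden
Wa = rng.normal(0, 0.1, (2, H))      # hidden -> glimpse centre (y, x)

def glimpse(img, h):
    """Crop a GxG window whose centre is predicted from the hidden state."""
    n = img.shape[0]
    centre = 1.0 / (1.0 + np.exp(-(Wa @ h)))   # squash to (0, 1)
    cy, cx = (centre * (n - G)).astype(int)    # top-left corner of the crop
    return img[cy:cy + G, cx:cx + G].ravel()

def arc_embedding(img_a, img_b):
    """Alternate glimpses between the two images with one shared RNN state."""
    h = np.zeros(H)
    for t in range(T):
        img = img_a if t % 2 == 0 else img_b
        h = np.tanh(Wx @ glimpse(img, h) + Wh @ h)
    return h  # final state summarises the comparison of the two objects
```

For one-shot classification, the final embedding would be fed to a small classifier (or compared across support examples); here it simply illustrates how a single recurrent state accumulates interleaved observations of both inputs.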

Cite

Text

Shyam et al. "Attentive Recurrent Comparators." International Conference on Machine Learning, 2017.

Markdown

[Shyam et al. "Attentive Recurrent Comparators." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/shyam2017icml-attentive/)

BibTeX

@inproceedings{shyam2017icml-attentive,
  title     = {{Attentive Recurrent Comparators}},
  author    = {Shyam, Pranav and Gupta, Shubham and Dukkipati, Ambedkar},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {3173--3181},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/shyam2017icml-attentive/}
}