Learning Feed-Forward One-Shot Learners

Abstract

One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
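To make the abstract's construction concrete, below is a minimal sketch (not the authors' code) of one factorization in this spirit: instead of emitting a full weight matrix for a pupil layer, the learnet outputs only a diagonal factor d(z) from the exemplar z, and the pupil weights are reconstructed as W(z) = M' diag(d(z)) M, where M and M' are exemplar-independent projections. The layer sizes, the single-linear-layer learnet, and all names are illustrative assumptions.

import numpy as np

# Toy dimensions (assumptions for illustration): the pupil layer maps
# d_in-dimensional inputs to d_out-dimensional outputs via a rank-k factorization.
rng = np.random.default_rng(0)
d_in, d_out, k = 64, 32, 16

# Exemplar-independent projections, shared across all one-shot tasks
# (trained end-to-end together with the learnet in the paper's formulation).
M = rng.standard_normal((k, d_in)) * 0.1
M_prime = rng.standard_normal((d_out, k)) * 0.1

# A deliberately tiny learnet: a single linear map from the exemplar z to the
# k entries of the diagonal factor d(z). The actual learnet is a deep network.
W_learnet = rng.standard_normal((k, d_in)) * 0.1

def pupil_forward(x, z):
    """Apply the pupil layer to input x with parameters predicted from exemplar z."""
    d = W_learnet @ z                 # learnet output: diagonal factor d(z)
    # Factorized prediction W(z) = M' diag(d(z)) M, applied without ever
    # materializing the full d_out x d_in matrix: scale (M @ x) elementwise by d.
    return M_prime @ (d * (M @ x))

z = rng.standard_normal(d_in)         # single exemplar of the new class
x = rng.standard_normal(d_in)         # query input to the pupil network
y = pupil_forward(x, z)
print(y.shape)                        # (32,)

The point of such a factorization is that the learnet only has to predict k numbers rather than all d_out x d_in entries of the weight matrix, which is what makes predicting the parameters of a deep pupil network feasible.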

Cite

Text

Bertinetto et al. "Learning Feed-Forward One-Shot Learners." Neural Information Processing Systems, 2016.

Markdown

[Bertinetto et al. "Learning Feed-Forward One-Shot Learners." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/bertinetto2016neurips-learning/)

BibTeX

@inproceedings{bertinetto2016neurips-learning,
  title     = {{Learning Feed-Forward One-Shot Learners}},
  author    = {Bertinetto, Luca and Henriques, João F. and Valmadre, Jack and Torr, Philip H. S. and Vedaldi, Andrea},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {523--531},
  url       = {https://mlanthology.org/neurips/2016/bertinetto2016neurips-learning/}
}