Learning to Learn Variational Semantic Memory

Abstract

In this paper, we introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning. The variational semantic memory accrues and stores semantic information for the probabilistic inference of class prototypes in a hierarchical Bayesian framework. The semantic memory is grown from scratch and gradually consolidated by absorbing information from the tasks it experiences. In doing so, it accumulates long-term, general knowledge that enables the model to learn new object concepts. We formulate memory recall as the variational inference of a latent memory variable from addressed contents, which offers a principled way to adapt the knowledge to individual tasks. Our variational semantic memory, as a new long-term memory module, confers principled recall and update mechanisms that enable semantic information to be efficiently accrued and adapted for few-shot learning. Experiments demonstrate that the probabilistic modelling of prototypes yields a more informative representation of object classes than deterministic vectors. The consistent new state-of-the-art performance on four benchmarks shows the benefit of variational semantic memory in boosting few-shot recognition.
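To make the recall mechanism described in the abstract concrete, below is a minimal sketch in Python/NumPy of variational memory recall for prototype inference. It assumes dot-product addressing over memory slots and diagonal-Gaussian posteriors with a fixed variance; the function names (recall_memory, infer_prototype) and the simple averaging fusion rule are illustrative assumptions, not the paper's actual inference networks, which are learned end-to-end by episodic meta-training.

# A minimal sketch of variational memory recall for prototype inference.
# Assumes dot-product addressing and diagonal-Gaussian posteriors; all
# names and the fusion rule are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def recall_memory(memory, query, noise_scale=0.1):
    """Address memory slots by similarity to the query, treat the addressed
    content as the mean of a Gaussian over the latent memory variable m,
    and sample m via the reparameterization trick."""
    weights = softmax(memory @ query)           # addressing weights over slots
    mu_m = weights @ memory                     # addressed content = mean of q(m)
    sigma_m = noise_scale * np.ones_like(mu_m)  # fixed variance for this sketch
    return mu_m + sigma_m * rng.standard_normal(mu_m.shape)

def infer_prototype(support_feats, m):
    """Combine the recalled memory sample with support features to
    parameterize a Gaussian over the class prototype z."""
    mu_z = 0.5 * (support_feats.mean(axis=0) + m)  # simple fusion for this sketch
    sigma_z = support_feats.std(axis=0) + 1e-3
    return mu_z, sigma_z

# Toy usage: 5 memory slots, a 3-shot support set, 8-d features.
memory = rng.standard_normal((5, 8))
support = rng.standard_normal((3, 8))
m = recall_memory(memory, query=support.mean(axis=0))
mu_z, sigma_z = infer_prototype(support, m)
z = mu_z + sigma_z * rng.standard_normal(8)     # sampled class prototype
print("prototype sample:", z.round(2))

The design choice mirrored here is that the addressed memory content parameterizes a distribution over a latent memory variable rather than being read out deterministically, so the recalled knowledge carries uncertainty that propagates into the prototype distribution.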

Cite

Text

Zhen et al. "Learning to Learn Variational Semantic Memory." Neural Information Processing Systems, 2020.

Markdown

[Zhen et al. "Learning to Learn Variational Semantic Memory." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zhen2020neurips-learning/)

BibTeX

@inproceedings{zhen2020neurips-learning,
  title     = {{Learning to Learn Variational Semantic Memory}},
  author    = {Zhen, Xiantong and Du, Yingjun and Xiong, Huan and Qiu, Qiang and Snoek, Cees and Shao, Ling},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/zhen2020neurips-learning/}
}