Learn from Concepts: Towards the Purified Memory for Few-Shot Learning

Abstract

Human beings have a remarkable ability to generalize: we can recognize a novel category after seeing only a few samples. This is because humans can learn from the concepts that already exist in our minds. However, many existing few-shot approaches fail to address this fundamental problem, i.e., how to utilize knowledge learned in the past to improve prediction on a new task. In this paper, we present a novel purified memory mechanism that simulates the recognition process of human beings. This new memory updating scheme enables the model to purify information from semantic labels and to progressively learn consistent, stable, and expressive concepts as episodes are trained one by one. On this basis, a Graph Augmentation Module (GAM) is introduced to aggregate these concepts with knowledge learned from the new task via a graph neural network, making the prediction more accurate. Our approach is model-agnostic and computationally efficient, with negligible memory cost. Extensive experiments on several benchmarks demonstrate that the proposed method consistently outperforms a wide range of state-of-the-art few-shot learning methods.
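The abstract only sketches the mechanism, so as a rough, hedged illustration: the "purified memory" could be thought of as episode-wise moving-average updates of per-class concept vectors, and the GAM as an aggregation of those concepts into each query embedding. The function names, the momentum-style update, and the attention-style aggregation below are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def update_concept_memory(memory, labels, features, momentum=0.9):
    """Episode-wise moving-average update of per-class concept vectors.
    A hypothetical stand-in for the paper's purified memory update."""
    for label, feat in zip(labels, features):
        # Blend the stored concept with the new episode feature, then
        # re-normalize so concepts stay on the unit sphere.
        memory[label] = momentum * memory[label] + (1 - momentum) * feat
        memory[label] /= np.linalg.norm(memory[label])
    return memory

def augment_query(query, concepts, tau=0.1):
    """Attention-style aggregation of memorized concepts into a query
    embedding -- a simplified proxy for the graph-based GAM."""
    sims = concepts @ query / tau            # similarity to each concept
    weights = np.exp(sims - sims.max())      # stable softmax weights
    weights /= weights.sum()
    return query + weights @ concepts        # concept-augmented embedding
```

Under this reading, the memory is updated after every training episode, while `augment_query` is applied at inference time so that predictions on a new task can draw on previously learned concepts.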

Cite

Text

Liu et al. "Learn from Concepts: Towards the Purified Memory for Few-Shot Learning." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/123

Markdown

[Liu et al. "Learn from Concepts: Towards the Purified Memory for Few-Shot Learning." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/liu2021ijcai-learn/) doi:10.24963/IJCAI.2021/123

BibTeX

@inproceedings{liu2021ijcai-learn,
  title     = {{Learn from Concepts: Towards the Purified Memory for Few-Shot Learning}},
  author    = {Liu, Xuncheng and Tian, Xudong and Lin, Shaohui and Qu, Yanyun and Ma, Lizhuang and Yuan, Wang and Zhang, Zhizhong and Xie, Yuan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {888--894},
  doi       = {10.24963/IJCAI.2021/123},
  url       = {https://mlanthology.org/ijcai/2021/liu2021ijcai-learn/}
}