Learning with Selective Forgetting

Abstract

Lifelong learning aims to train a highly expressive model for a new task while retaining all knowledge of previous tasks. However, many practical scenarios do not require the system to remember all of the past knowledge. Instead, ethical considerations call for selective and proactive forgetting of undesirable knowledge in order to prevent privacy issues and data leakage. In this paper, we propose a new framework for lifelong learning, called Learning with Selective Forgetting, which updates a model for a new task while forgetting only selected classes of the previous tasks and maintaining the rest. The key is to introduce a class-specific synthetic signal called a mnemonic code. The codes are "watermarked" on all the training samples of the corresponding classes when the model is updated for a new task. This enables us to forget arbitrary classes later using only the mnemonic codes, without access to the original data. Experiments on common benchmark datasets demonstrate the remarkable superiority of the proposed method over several existing methods.
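The abstract's watermarking idea can be illustrated with a minimal sketch. The helper names (`make_mnemonic_code`, `watermark`), the blending scheme, and the parameters (`alpha`, the per-class random seed) are all illustrative assumptions, not the paper's actual construction: a fixed class-specific pattern is blended at low intensity into each training sample of that class, so the code alone can later stand in for the class when inducing forgetting.

```python
import numpy as np

def make_mnemonic_code(class_id: int, shape=(32, 32, 3), seed: int = 0) -> np.ndarray:
    # Hypothetical mnemonic code: a fixed, class-specific random pattern.
    # Seeding with the class id makes the code reproducible per class.
    rng = np.random.default_rng(seed + class_id)
    return rng.uniform(0.0, 1.0, size=shape).astype(np.float32)

def watermark(image: np.ndarray, class_id: int, alpha: float = 0.1) -> np.ndarray:
    # Blend the class's mnemonic code into the sample at low intensity,
    # so training associates the code with the class label.
    code = make_mnemonic_code(class_id, shape=image.shape)
    return np.clip((1.0 - alpha) * image + alpha * code, 0.0, 1.0)
```

To forget a class later, one would present only its mnemonic code (no original data) and penalize the model's response to it; the sketch above covers just the watermarking step at training time.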

Cite

Text

Shibata et al. "Learning with Selective Forgetting." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/137

Markdown

[Shibata et al. "Learning with Selective Forgetting." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/shibata2021ijcai-learning/) doi:10.24963/IJCAI.2021/137

BibTeX

@inproceedings{shibata2021ijcai-learning,
  title     = {{Learning with Selective Forgetting}},
  author    = {Shibata, Takashi and Irie, Go and Ikami, Daiki and Mitsuzumi, Yu},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {989--996},
  doi       = {10.24963/IJCAI.2021/137},
  url       = {https://mlanthology.org/ijcai/2021/shibata2021ijcai-learning/}
}