Forget-Free Continual Learning with Winning Subnetworks
Abstract
Inspired by the Lottery Ticket Hypothesis, which posits that competitive subnetworks exist within a dense network, we propose a continual learning method referred to as Winning SubNetworks (WSN), which sequentially learns and selects an optimal subnetwork for each task. Specifically, WSN jointly learns the model weights and task-adaptive binary masks pertaining to subnetworks associated with each task, whilst attempting to select a small set of weights to be activated (winning ticket) by reusing weights of the prior subnetworks. The proposed method is inherently immune to catastrophic forgetting as each selected subnetwork model does not infringe upon other subnetworks. Binary masks spawned per winning ticket are encoded into one N-bit binary digit mask, then compressed using Huffman coding for a sub-linear increase in network capacity with respect to the number of tasks.
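The mask mechanics described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a per-weight importance score (the paper learns these jointly with the weights), selects the top-c fraction of weights as a task's binary mask, and packs one mask per task into a single integer array by assigning bit t to task t, matching the "one N-bit binary digit mask" encoding; Huffman coding of the packed mask is omitted here.

```python
import numpy as np

def select_subnetwork(scores, c):
    """Binary mask keeping the top-c fraction of weights by importance score.

    `scores` stands in for the learnable per-weight scores WSN trains
    jointly with the model weights (hypothetical stand-in here).
    """
    k = max(1, int(c * scores.size))
    thresh = np.sort(scores.ravel())[-k]          # k-th largest score
    return (scores >= thresh).astype(np.uint8)    # 1 = weight in winning ticket

def pack_task_masks(masks):
    """Encode per-task binary masks into one integer mask per weight.

    Bit t of each packed entry stores task t's mask bit, so T task masks
    occupy a single T-bit value per weight (here capped at 64 tasks by uint64).
    """
    packed = np.zeros(masks[0].shape, dtype=np.uint64)
    for t, m in enumerate(masks):
        packed |= m.astype(np.uint64) << np.uint64(t)
    return packed

def unpack_task_mask(packed, t):
    """Recover task t's binary mask from the packed representation."""
    return ((packed >> np.uint64(t)) & np.uint64(1)).astype(np.uint8)
```

At inference time for task t, the forward pass would use only the weights where `unpack_task_mask(packed, t)` is 1, which is why earlier tasks' subnetworks are untouched by later training.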
Cite
Text
Kang et al. "Forget-Free Continual Learning with Winning Subnetworks." International Conference on Machine Learning, 2022.
Markdown
[Kang et al. "Forget-Free Continual Learning with Winning Subnetworks." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/kang2022icml-forgetfree/)
BibTeX
@inproceedings{kang2022icml-forgetfree,
title = {{Forget-Free Continual Learning with Winning Subnetworks}},
author = {Kang, Haeyong and Mina, Rusty John Lloyd and Madjid, Sultan Rizky Hikmawan and Yoon, Jaehong and Hasegawa-Johnson, Mark and Hwang, Sung Ju and Yoo, Chang D.},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {10734--10750},
volume = {162},
url = {https://mlanthology.org/icml/2022/kang2022icml-forgetfree/}
}