Continual Learning via Neural Pruning
Abstract
Inspired by the modularity and the life-cycle of biological neurons, we introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at lifelong learning in fixed-capacity models based on the pruning of neurons of low activity. In this method, an L1 regularizer is used to promote the presence of neurons of zero or low activity whose connections to previously active neurons are permanently severed at the end of training. Subsequent tasks are trained using these pruned neurons after reinitialization and cause zero deterioration to the performance of previous tasks. We show empirically that this biologically inspired method leads to state-of-the-art results, beating or matching current methods of higher computational complexity.
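To make the mechanism concrete, below is a minimal sketch of the pruning step described in the abstract, for one hidden layer of a two-layer network. The threshold tau, the activity statistic (mean absolute activation over a batch), and all variable names are illustrative assumptions, not the paper's exact protocol.

import torch
import torch.nn as nn

torch.manual_seed(0)

fc1 = nn.Linear(784, 256)   # input -> hidden
fc2 = nn.Linear(256, 10)    # hidden -> output
x = torch.randn(512, 784)   # a batch of placeholder inputs

# 1. After training on task A (with an L1 activity penalty in the loss),
#    measure per-neuron activity.
with torch.no_grad():
    h = torch.relu(fc1(x))          # hidden activations
    activity = h.abs().mean(dim=0)  # mean |activation| per neuron

tau = 0.01               # pruning threshold (assumed value)
active = activity > tau  # neurons kept frozen for task A
free = ~active           # low-activity neurons recycled for task B

# 2. Permanently sever connections from the recycled neurons into the
#    previously active part of the network, so retraining them cannot
#    perturb task A's outputs: zero the columns of fc2 reading from them.
with torch.no_grad():
    fc2.weight[:, free] = 0.0

# 3. Reinitialize the recycled neurons' incoming weights for the next task.
with torch.no_grad():
    fc1.weight[free] = nn.init.kaiming_uniform_(torch.empty(int(free.sum()), 784))
    fc1.bias[free] = 0.0

# 4. During task B, the active neurons' parameters would be frozen
#    (e.g., via gradient masks), so task A's performance is preserved
#    by construction.
print(f"kept {int(active.sum())} neurons, recycled {int(free.sum())}")

Because the severed connections and frozen weights mean the active subnetwork computes exactly the same function after task B is trained, the zero-deterioration claim holds by construction rather than by approximation.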
Cite
Text
Golkar et al. "Continual Learning via Neural Pruning." NeurIPS 2019 Workshops: Neuro_AI, 2019.
Markdown
[Golkar et al. "Continual Learning via Neural Pruning." NeurIPS 2019 Workshops: Neuro_AI, 2019.](https://mlanthology.org/neuripsw/2019/golkar2019neuripsw-continual/)
BibTeX
@inproceedings{golkar2019neuripsw-continual,
title = {{Continual Learning via Neural Pruning}},
author = {Golkar, Siavash and Kagan, Michael and Cho, Kyunghyun},
booktitle = {NeurIPS 2019 Workshops: Neuro_AI},
year = {2019},
url = {https://mlanthology.org/neuripsw/2019/golkar2019neuripsw-continual/}
}