Neural Architecture Search of Deep Priors: Towards Continual Learning Without Catastrophic Interference

Abstract

In this paper, we analyze the classification performance of neural network structures without parametric inference. Using neural architecture search, we empirically demonstrate that it is possible to find random-weight architectures, a deep prior, that enable a linear classifier to perform on par with fully trained deep counterparts. Through ablation experiments, we exclude the possibility of winning a weight initialization lottery and confirm that suitable deep priors do not require additional inference. In an extension to continual learning, we investigate the possibility of incremental learning free of catastrophic interference. Under the assumption that classes originate from the same data distribution, a deep prior found on only a subset of classes is shown to allow discrimination of further classes through training of a simple linear classifier.
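To make the deep-prior idea from the abstract concrete, the sketch below freezes a small, randomly initialized convolutional network and trains only a linear classifier on its features. This is a minimal illustration, not the authors' code: the architecture, dimensions, and dummy data are placeholder assumptions, whereas the paper searches for the random-weight architecture itself.

```python
import torch
import torch.nn as nn

# Hypothetical deep prior: a small conv net kept at its random
# initialization (illustrative architecture, not one found by the
# paper's architecture search).
class RandomPrior(nn.Module):
    def __init__(self, in_channels=3, width=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Freeze all weights: the prior is never trained.
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.features(x).flatten(1)

prior = RandomPrior().eval()      # random, frozen feature extractor
classifier = nn.Linear(64, 10)    # only this linear head is trained
optimizer = torch.optim.SGD(classifier.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for real data (e.g. 32x32 RGB images, 10 classes).
images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))

with torch.no_grad():             # no gradients flow into the prior
    feats = prior(images)
loss = criterion(classifier(feats), labels)
loss.backward()
optimizer.step()
```

In the continual-learning setting described above, the same frozen prior would be reused for newly arriving classes while only the linear head is extended or retrained, so no interference can occur in the feature extractor itself.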

Cite

Text

Mundt et al. "Neural Architecture Search of Deep Priors: Towards Continual Learning Without Catastrophic Interference." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00391

Markdown

[Mundt et al. "Neural Architecture Search of Deep Priors: Towards Continual Learning Without Catastrophic Interference." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/mundt2021cvprw-neural/) doi:10.1109/CVPRW53098.2021.00391

BibTeX

@inproceedings{mundt2021cvprw-neural,
  title     = {{Neural Architecture Search of Deep Priors: Towards Continual Learning Without Catastrophic Interference}},
  author    = {Mundt, Martin and Pliushch, Iuliia and Ramesh, Visvanathan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {3523--3532},
  doi       = {10.1109/CVPRW53098.2021.00391},
  url       = {https://mlanthology.org/cvprw/2021/mundt2021cvprw-neural/}
}