A Combinatorial Perspective on Transfer Learning

Abstract

Human intelligence is characterized not only by the capacity to learn complex skills, but also by the ability to rapidly adapt and acquire new skills within an ever-changing environment. In this work we study how learning modular solutions can allow for effective generalization to both unseen and potentially differently distributed data. Our main postulate is that the combination of task segmentation, modular learning and memory-based ensembling can give rise to generalization on an exponentially growing number of unseen tasks. We provide a concrete instantiation of this idea using a combination of: (1) the Forget-Me-Not Process, for task segmentation and memory-based ensembling; and (2) Gated Linear Networks, which in contrast to contemporary deep learning techniques use a modular and local learning mechanism. We demonstrate that this system exhibits a number of desirable continual learning properties: robustness to catastrophic forgetting, no negative transfer, and increasing levels of positive transfer as more tasks are seen. We show competitive performance against both offline and online methods on standard continual learning benchmarks.
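
To make the "modular and local learning mechanism" concrete, below is a minimal sketch of a single gated geometric-mixing neuron in the spirit of Gated Linear Networks. All names (GLNNeuron, logit, sigmoid), hyperparameters, and the toy task are illustrative assumptions, not the authors' implementation: halfspace gating on side information selects one weight vector per example, and each neuron runs online gradient descent on its own log loss rather than receiving a backpropagated error signal.

# A minimal sketch of a gated geometric-mixing neuron, in the spirit of
# Gated Linear Networks. Names and hyperparameters are illustrative
# assumptions, not the authors' code.
import numpy as np

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GLNNeuron:
    def __init__(self, n_inputs, n_halfspaces, side_dim, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per gating context; halfspace gating on the
        # side information selects which vector is active.
        self.w = np.full((2 ** n_halfspaces, n_inputs), 1.0 / n_inputs)
        self.hyperplanes = rng.standard_normal((n_halfspaces, side_dim))
        self.lr = lr

    def context(self, side_info):
        # Binary code: which side of each random hyperplane the input lies on.
        bits = (self.hyperplanes @ side_info > 0).astype(int)
        return int(bits @ (2 ** np.arange(bits.size)))

    def predict(self, p_in, side_info):
        # Geometric mixing of input probabilities under the active weights.
        c = self.context(side_info)
        return sigmoid(self.w[c] @ logit(p_in)), c

    def update(self, p_in, side_info, target):
        # Local learning: online gradient descent on this neuron's own log
        # loss; the gradient w.r.t. the active weights is
        # (p - target) * logit(p_in). No error is backpropagated from other
        # neurons. (The weight projection used in GLN implementations is
        # omitted here for brevity.)
        p, c = self.predict(p_in, side_info)
        self.w[c] -= self.lr * (p - target) * logit(p_in)
        return p

# Toy usage: predict the sign of the first side-information feature.
neuron = GLNNeuron(n_inputs=4, n_halfspaces=3, side_dim=8)
rng = np.random.default_rng(1)
for _ in range(1000):
    side = rng.standard_normal(8)
    p_in = np.clip(0.5 + 0.1 * side[:4], 0.01, 0.99)  # weak base predictions
    neuron.update(p_in, side, target=float(side[0] > 0))

Because each update touches only the weight vector selected by the current context, examples from a new task perturb few of the weights fitted on earlier data, which is one intuition for the robustness to catastrophic forgetting claimed in the abstract.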

Cite

Text

Wang et al. "A Combinatorial Perspective on Transfer Learning." Neural Information Processing Systems, 2020.

Markdown

[Wang et al. "A Combinatorial Perspective on Transfer Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/wang2020neurips-combinatorial/)

BibTeX

@inproceedings{wang2020neurips-combinatorial,
  title     = {{A Combinatorial Perspective on Transfer Learning}},
  author    = {Wang, Jianan and Sezener, Eren and Budden, David and Hutter, Marcus and Veness, Joel},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/wang2020neurips-combinatorial/}
}