Information-Theoretic Online Memory Selection for Continual Learning

Abstract

A challenging problem in task-free continual learning is the online selection of a representative replay memory from data streams. In this work, we investigate the online memory selection problem from an information-theoretic perspective. To gather the most information, we propose the surprise and the learnability criteria, which pick informative points while avoiding outliers. We present a Bayesian model that computes both criteria efficiently by exploiting rank-one matrix structures, and we demonstrate that these criteria encourage the selection of informative points in a greedy algorithm for online memory selection. Furthermore, recognizing the importance of the timing of memory updates, we introduce a stochastic information-theoretic reservoir sampler (InfoRS), which samples only among points with high information content. Compared to standard reservoir sampling, InfoRS is more robust to data imbalance. Finally, empirical results on continual learning benchmarks demonstrate its efficiency and efficacy.
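To make the selective-sampling idea concrete, the following minimal Python sketch shows a reservoir sampler that only considers points passing an information filter. The info_score function and threshold are hypothetical stand-ins for the paper's surprise/learnability criteria; this is a conceptual illustration, not the authors' InfoRS implementation.

import random

def info_reservoir_sample(stream, memory_size, info_score, threshold):
    # Reservoir sampling restricted to informative points.
    # info_score and threshold are placeholders for the paper's
    # surprise/learnability criteria (assumption, not the paper's code).
    memory = []
    n_informative = 0  # count of points that passed the filter so far
    for x in stream:
        if info_score(x) < threshold:
            continue  # skip low-information points
        n_informative += 1
        if len(memory) < memory_size:
            memory.append(x)
        else:
            # Standard reservoir step: each informative point is kept
            # with probability memory_size / n_informative.
            j = random.randrange(n_informative)
            if j < memory_size:
                memory[j] = x
    return memory

As a rough usage example, a per-point loss under the current model could serve as a crude surprise proxy, e.g. info_reservoir_sample(stream, 100, surprise_fn, tau), where surprise_fn and tau are user-chosen and would be tuned in practice.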

Cite

Text

Sun et al. "Information-Theoretic Online Memory Selection for Continual Learning." International Conference on Learning Representations, 2022.

Markdown

[Sun et al. "Information-Theoretic Online Memory Selection for Continual Learning." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/sun2022iclr-informationtheoretic/)

BibTeX

@inproceedings{sun2022iclr-informationtheoretic,
  title     = {{Information-Theoretic Online Memory Selection for Continual Learning}},
  author    = {Sun, Shengyang and Calandriello, Daniele and Hu, Huiyi and Li, Ang and Titsias, Michalis},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/sun2022iclr-informationtheoretic/}
}