Continual Learning on Noisy Data Streams via Self-Purified Replay
Abstract
Continually learning in the real world must overcome many challenges, among which noisy labels are a common and inevitable issue. In this work, we present a replay-based continual learning framework that simultaneously addresses both catastrophic forgetting and noisy labels for the first time. Our solution is based on two observations: (i) forgetting can be mitigated even with noisy labels via self-supervised learning, and (ii) the purity of the replay buffer is crucial. Building on these observations, we propose two key components of our method: (i) a self-supervised replay technique named Self-Replay, which can circumvent erroneous training signals arising from noisily labeled data, and (ii) the Self-Centered filter, which maintains a purified replay buffer via centrality-based stochastic graph ensembles. The empirical results on MNIST, CIFAR-10, CIFAR-100, and WebVision with real-world noise demonstrate that our framework can maintain a highly pure replay buffer amidst noisy streamed data while greatly outperforming combinations of state-of-the-art continual learning and noisy label learning methods.
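To illustrate the buffer-purification idea, the sketch below is a simplified, deterministic stand-in for the paper's centrality-based stochastic graph ensembles, not the authors' implementation; the function name, similarity threshold, and synthetic data are hypothetical. It assumes per-class feature vectors from a self-supervised encoder and keeps the samples that are most central in a cosine-similarity graph, on the premise that mislabeled samples tend to be feature-space outliers.

import numpy as np

def purify_class_buffer(features: np.ndarray, keep: int,
                        sim_threshold: float = 0.5) -> np.ndarray:
    """Return indices of the `keep` most central samples of one class.

    features: (n, d) feature matrix from a self-supervised encoder.
    keep: number of samples to retain in the replay buffer.
    sim_threshold: cosine similarity above which two samples are connected.
    """
    # Build a cosine-similarity graph over the class's samples.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    adj = (sim > sim_threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    # Degree centrality: samples with noisy labels are typically
    # weakly connected, so low-centrality samples are filtered out.
    centrality = adj.sum(axis=1)
    return np.argsort(-centrality)[:keep]

# Usage on synthetic features: a tight "clean" cluster plus scattered outliers.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(90, 32)) + 1.0
noisy = rng.normal(0.0, 1.0, size=(10, 32))
kept = purify_class_buffer(np.vstack([clean, noisy]), keep=10)
print(kept)  # indices drawn predominantly from the clean cluster (0-89)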
Cite
Text
Kim et al. "Continual Learning on Noisy Data Streams via Self-Purified Replay." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00058
Markdown
[Kim et al. "Continual Learning on Noisy Data Streams via Self-Purified Replay." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/kim2021iccv-continual/) doi:10.1109/ICCV48922.2021.00058
BibTeX
@inproceedings{kim2021iccv-continual,
title = {{Continual Learning on Noisy Data Streams via Self-Purified Replay}},
author = {Kim, Chris Dongjoo and Jeong, Jinseo and Moon, Sangwoo and Kim, Gunhee},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {537--547},
doi = {10.1109/ICCV48922.2021.00058},
url = {https://mlanthology.org/iccv/2021/kim2021iccv-continual/}
}