Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay
Abstract
Graph Neural Networks (GNNs) have recently received significant research attention due to their superior performance on a variety of graph-related learning tasks. Most current works focus on either static or dynamic graph settings and address a single task, e.g., node/graph classification or link prediction. In this work, we investigate the question: can GNNs continually learn a sequence of tasks? Towards that, we explore the Continual Graph Learning (CGL) paradigm and present ER-GNN, an experience-replay-based framework for CGL that alleviates the catastrophic forgetting problem in existing GNNs. ER-GNN stores knowledge from previous tasks as experiences and replays them when learning new tasks. We propose three experience node selection strategies, mean of feature, coverage maximization, and influence maximization, to guide the selection of experience nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our ER-GNN and shed light on incremental learning over (non-Euclidean) graph structures.
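To make the replay idea concrete, here is a minimal NumPy sketch of the first selection strategy, mean of feature: for each class in the current task, keep the nodes whose feature vectors lie closest to the class mean, and store them in a buffer to be replayed when training on later tasks. The function and class names are our own illustration under simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def select_by_mean_of_feature(features, labels, budget_per_class):
    """For each class, return indices of the `budget_per_class` nodes
    whose feature vectors are closest (Euclidean) to the class mean.
    A sketch of the mean-of-feature strategy, not the authors' code."""
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        class_mean = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - class_mean, axis=1)
        selected.extend(idx[np.argsort(dists)[:budget_per_class]].tolist())
    return selected

class ExperienceBuffer:
    """Accumulates experience node indices across tasks; during training
    on a new task, these nodes would be replayed alongside the new
    task's nodes when computing the loss."""
    def __init__(self):
        self.node_ids = []

    def add_task(self, features, labels, budget_per_class=1):
        self.node_ids.extend(
            select_by_mean_of_feature(features, labels, budget_per_class))

# Toy usage: two classes, keep one representative node per class.
features = np.array([[0., 0.], [0.1, 0.], [5., 5.], [0., 1.], [1., 1.], [4., 4.]])
labels = np.array([0, 0, 1, 0, 0, 1])
buf = ExperienceBuffer()
buf.add_task(features, labels, budget_per_class=1)
```

Coverage maximization and influence maximization would replace the distance-to-mean criterion with a diversity- or influence-based score, but the buffer mechanics stay the same.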
Cite
Text
Zhou and Cao. "Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I5.16602
Markdown
[Zhou and Cao. "Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/zhou2021aaai-overcoming/) doi:10.1609/AAAI.V35I5.16602
BibTeX
@inproceedings{zhou2021aaai-overcoming,
title = {{Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay}},
author = {Zhou, Fan and Cao, Chengtai},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {4714-4722},
doi = {10.1609/AAAI.V35I5.16602},
url = {https://mlanthology.org/aaai/2021/zhou2021aaai-overcoming/}
}