A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs
Abstract
Distribution shifts, in which the training distribution differs from the testing distribution, can significantly degrade the performance of Graph Neural Networks (GNNs). We curate GDS, a benchmark of eight datasets reflecting a diverse range of distribution shifts across graphs. We observe that: (1) most domain generalization algorithms fail to work when applied to distribution shifts on graphs; and (2) combinations of powerful GNN models and augmentation techniques usually achieve the best out-of-distribution performance. These findings emphasize the need for domain generalization algorithms tailored to graphs and for additional graph augmentation techniques that enhance the robustness of predictors.
Cite
Text
Ding et al. "A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs." NeurIPS 2021 Workshops: DistShift, 2021.
Markdown
[Ding et al. "A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs." NeurIPS 2021 Workshops: DistShift, 2021.](https://mlanthology.org/neuripsw/2021/ding2021neuripsw-closer/)
BibTeX
@inproceedings{ding2021neuripsw-closer,
  title     = {{A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs}},
  author    = {Ding, Mucong and Kong, Kezhi and Chen, Jiuhai and Kirchenbauer, John and Goldblum, Micah and Wipf, David and Huang, Furong and Goldstein, Tom},
  booktitle = {NeurIPS 2021 Workshops: DistShift},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/ding2021neuripsw-closer/}
}