Shedding Light on Random Dropping and Oversmoothing

Abstract

Graph Neural Networks (GNNs) are widespread in graph representation learning. *Random dropping* approaches, notably DropEdge and DropMessage, claim to alleviate the key issues of overfitting and oversmoothing by randomly removing elements of the graph representation. However, their effectiveness is largely unverified. In this work, we show empirically that they have a limited effect in reducing oversmoothing at test time due to their training-time-exclusive nature. We show that DropEdge in particular can be seen as a form of training data augmentation, and that its benefits to model generalization are not strictly related to oversmoothing, suggesting that in practice, the precise link between oversmoothing and performance is more nuanced than previously thought. We address the limitations of current dropping methods by *learning* to drop via optimizing an information bottleneck, which enables dropping to be performed effectively at test time.
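To make the "training-time-exclusive" point concrete, the following is a minimal sketch of DropEdge-style random dropping, not the authors' implementation; the function name, signature, and drop rate are illustrative assumptions. Each edge is kept independently with probability 1 - p during training, while the full graph is used at inference, which is why dropping by itself cannot reduce oversmoothing at test time.

import torch

def drop_edge(edge_index: torch.Tensor, p: float = 0.2, training: bool = True) -> torch.Tensor:
    """Randomly remove a fraction p of edges from a [2, num_edges] edge index (hypothetical helper)."""
    if not training or p <= 0.0:
        # At evaluation the full graph is used, so dropping does not act at test time.
        return edge_index
    keep_mask = torch.rand(edge_index.size(1)) >= p  # sample which edges survive this pass
    return edge_index[:, keep_mask]

In use, one would call drop_edge(edge_index, p=0.2, training=model.training) before each forward pass, so the model sees a different random subgraph per epoch, which is what makes the method behave like training data augmentation.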

Cite

Text

Xuanyuan et al. "Shedding Light on Random Dropping and Oversmoothing." NeurIPS 2023 Workshops: GLFrontiers, 2023.

Markdown

[Xuanyuan et al. "Shedding Light on Random Dropping and Oversmoothing." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/xuanyuan2023neuripsw-shedding/)

BibTeX

@inproceedings{xuanyuan2023neuripsw-shedding,
  title     = {{Shedding Light on Random Dropping and Oversmoothing}},
  author    = {Xuanyuan, Han and Zhao, Tianxiang and Luo, Dongsheng},
  booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/xuanyuan2023neuripsw-shedding/}
}