Calibrating and Improving Graph Contrastive Learning
Abstract
Graph contrastive learning algorithms have demonstrated remarkable success in various applications such as node classification, link prediction, and graph clustering. However, in unsupervised graph contrastive learning, some contrastive pairs may contradict the ground truth of downstream tasks, and thus decreasing the loss on these pairs undesirably harms downstream performance. To assess the discrepancy between the predictions on these contrastive pairs and the ground truth of the downstream tasks, we adapt the expected calibration error (ECE) to graph contrastive learning. Our analysis of ECE motivates a novel regularization method, Contrast-Reg, which ensures that decreasing the contrastive loss leads to better performance in the downstream tasks. As a plug-in regularizer, Contrast-Reg effectively improves the performance of existing graph contrastive learning algorithms. We provide both theoretical and empirical results to demonstrate the effectiveness of Contrast-Reg in enhancing the generalizability of the Graph Neural Network (GNN) model and improving the performance of graph contrastive learning algorithms with different similarity definitions and encoder backbones across various downstream tasks.
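For context, the sketch below shows the standard binned ECE computation (gap between per-bin accuracy and per-bin confidence) that the paper adapts to graph contrastive learning; how the adaptation maps contrastive pairs to predictions and labels is not detailed in the abstract, so this is only the conventional definition, with hypothetical inputs `confidences`, `predictions`, and `labels`.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Standard ECE: bin samples by confidence, then average the absolute
    gap between each bin's accuracy and its mean confidence, weighted by
    the fraction of samples in the bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()
            conf = confidences[in_bin].mean()
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```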
Cite
Text
Kaili et al. "Calibrating and Improving Graph Contrastive Learning." Transactions on Machine Learning Research, 2023.
Markdown
[Kaili et al. "Calibrating and Improving Graph Contrastive Learning." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/kaili2023tmlr-calibrating/)
BibTeX
@article{kaili2023tmlr-calibrating,
  title   = {{Calibrating and Improving Graph Contrastive Learning}},
  author  = {Kaili, Ma and Yang, Garry and Yang, Han and Chen, Yongqiang and Cheng, James},
  journal = {Transactions on Machine Learning Research},
  year    = {2023},
  url     = {https://mlanthology.org/tmlr/2023/kaili2023tmlr-calibrating/}
}