Causally-Guided Regularization of Graph Attention Improves Generalizability
Abstract
Graph attention networks estimate the relational importance of node neighbors to aggregate relevant information over local neighborhoods for a prediction task. However, the inferred attentions are vulnerable to spurious correlations and connectivity in the training data, hampering the generalizability of models. We introduce CAR, a general-purpose regularization framework for graph attention networks. Embodying a causal inference approach based on invariant prediction, CAR aligns the attention mechanism with the causal effects of active interventions on graph connectivity in a scalable manner. CAR is compatible with a variety of graph attention architectures, and we show that it systematically improves generalizability on various node classification tasks. Our ablation studies indicate that CAR homes in on the aspects of graph structure most pertinent to the prediction (e.g., homophily), and does so more effectively than alternative approaches. Finally, we also show that CAR enhances interpretability of attention coefficients by accentuating node-neighbor relations that point to causal hypotheses.
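The core idea of aligning attention with the effects of connectivity interventions can be illustrated with a minimal sketch. The paper's actual method differs in its architecture and scalable intervention scheme; everything below (the toy `predict` aggregator, the leave-one-edge-out "intervention", and the correlation-style penalty `car_style_loss`) is a hypothetical, NumPy-only illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(node_feats, neighbors, weights):
    """Toy aggregation 'prediction': attention-weighted mean of neighbor features."""
    return (weights[:, None] * node_feats[neighbors]).sum(axis=0)

def causal_effects(node_feats, neighbors, weights, target):
    """Estimate each edge's effect by intervening: drop the edge, renormalize
    the remaining weights, and measure the change in prediction error.
    A positive value means removing the edge hurts the prediction."""
    base_err = np.linalg.norm(predict(node_feats, neighbors, weights) - target)
    effects = []
    for i in range(len(neighbors)):
        keep = [j for j in range(len(neighbors)) if j != i]
        w = weights[keep] / weights[keep].sum()
        err = np.linalg.norm(
            predict(node_feats, np.array(neighbors)[keep], w) - target
        )
        effects.append(err - base_err)
    return np.array(effects)

def car_style_loss(attn_logits, effects):
    """Regularizer sketch: penalize misalignment between attention weights and
    estimated intervention effects via a negative cosine-similarity term
    (centered, so it behaves like a correlation penalty in [-1, 1])."""
    attn = softmax(attn_logits)
    a = attn - attn.mean()
    e = effects - effects.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(e) + 1e-12
    return -np.dot(a, e) / denom
```

In a full model this penalty would be added to the task loss, so that gradient descent nudges attention coefficients toward edges whose removal demonstrably degrades predictions.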
Cite
Text
Wu et al. "Causally-Guided Regularization of Graph Attention Improves Generalizability." Transactions on Machine Learning Research, 2023.

Markdown

[Wu et al. "Causally-Guided Regularization of Graph Attention Improves Generalizability." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/wu2023tmlr-causallyguided/)

BibTeX
@article{wu2023tmlr-causallyguided,
title = {{Causally-Guided Regularization of Graph Attention Improves Generalizability}},
author = {Wu, Alexander P and Markovich, Thomas and Berger, Bonnie and Hammerla, Nils Yannick and Singh, Rohit},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/wu2023tmlr-causallyguided/}
}