Invariant Structure Learning for Better Generalization and Causal Explainability
Abstract
Learning the causal structure behind data is invaluable for improving generalization and obtaining high-quality explanations. Towards this end, we propose a novel framework, Invariant Structure Learning (ISL), that is designed to improve causal structure discovery by utilizing generalization as an indication in the process. ISL splits the data into different environments, and learns a structure that is invariant to the target across different environments by imposing a consistency constraint. The proposed aggregation mechanism then selects the classifier based on a graph structure that reflects the causal mechanisms in the data more accurately than the structures learned from individual environments. Furthermore, we extend ISL to a self-supervised learning setting, where accurate causal structure discovery does not rely on any labels. Self-supervised ISL utilizes proposals for invariant causality by iteratively setting different nodes as targets. On synthetic and real-world datasets, we demonstrate that ISL accurately discovers the causal structure, outperforms alternative methods, and yields superior generalization for datasets with significant distribution shifts.
Cite
Text

Ge et al. "Invariant Structure Learning for Better Generalization and Causal Explainability." Transactions on Machine Learning Research, 2023.

Markdown

[Ge et al. "Invariant Structure Learning for Better Generalization and Causal Explainability." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/ge2023tmlr-invariant/)

BibTeX
@article{ge2023tmlr-invariant,
title = {{Invariant Structure Learning for Better Generalization and Causal Explainability}},
author = {Ge, Yunhao and Arik, Sercan O and Yoon, Jinsung and Xu, Ao and Itti, Laurent and Pfister, Tomas},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/ge2023tmlr-invariant/}
}