Amortized Inference for Causal Structure Learning
Abstract
Learning causal structure poses a combinatorial search problem that typically involves evaluating structures with a score or independence test. The resulting search is costly, and designing suitable scores or tests that capture prior knowledge is difficult. In this work, we propose to amortize causal structure learning. Rather than searching over structures, we train a variational inference model to predict the causal structure from observational or interventional data. This allows us to bypass both the search over graphs and the hand-engineering of suitable score functions. Instead, our inference model acquires domain-specific inductive biases for causal discovery solely from data generated by a simulator. The architecture of our inference model emulates permutation invariances that are crucial for statistical efficiency in structure learning, which facilitates generalization to significantly larger problem instances than seen during training. On synthetic data and semisynthetic gene expression data, our models exhibit robust generalization capabilities when subject to substantial distribution shifts and significantly outperform existing algorithms, especially in the challenging genomics domain. Our code and models are publicly available at: https://github.com/larslorch/avici
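The repository above contains the authors' full implementation; purely as an illustration of the idea described in the abstract, the minimal sketch below maps a data matrix of n samples over d variables to a matrix of edge probabilities while respecting the permutation symmetries the abstract highlights: mean-pooling over the sample axis makes the output invariant to sample order, and applying shared weights to every variable makes it equivariant to variable order. All names here (edge_probabilities, w_src, w_tgt, ...) are hypothetical, the weights are random and untrained, and this is not the AVICI architecture or API.

```python
import numpy as np

def edge_probabilities(x, h=32, seed=0):
    """Hypothetical sketch: map data x of shape [n, d] to edge
    probabilities of shape [d, d], invariant to sample order and
    equivariant to variable order. Untrained; not the AVICI model."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    # random projection weights stand in for learned parameters
    w_in = rng.normal(size=(1, h)) / np.sqrt(h)
    w_src = rng.normal(size=(h, h)) / np.sqrt(h)
    w_tgt = rng.normal(size=(h, h)) / np.sqrt(h)

    # per-(sample, variable) embeddings: [n, d, h]
    e = np.tanh(x[..., None] @ w_in)
    # mean-pool over samples -> per-variable embeddings: [d, h]
    v = e.mean(axis=0)
    # bilinear edge scores: logit(i -> j) = <W_src v_i, W_tgt v_j>
    logits = (v @ w_src) @ (v @ w_tgt).T
    np.fill_diagonal(logits, -np.inf)  # exclude self-loops
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> [d, d] in (0, 1)

# usage: 200 samples of 5 variables
x = np.random.default_rng(1).normal(size=(200, 5))
p = edge_probabilities(x)
print(p.shape)  # (5, 5)
```

In the amortized setting described in the abstract, the learned counterparts of these weights would be fit on graph/data pairs drawn from a simulator, so that the predicted edge probabilities match the generating causal structure.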
Cite
Text
Lorch et al. "Amortized Inference for Causal Structure Learning." NeurIPS 2022 Workshops: CML4Impact, 2022.
Markdown
[Lorch et al. "Amortized Inference for Causal Structure Learning." NeurIPS 2022 Workshops: CML4Impact, 2022.](https://mlanthology.org/neuripsw/2022/lorch2022neuripsw-amortized/)
BibTeX
@inproceedings{lorch2022neuripsw-amortized,
  title     = {{Amortized Inference for Causal Structure Learning}},
  author    = {Lorch, Lars and Sussex, Scott and Rothfuss, Jonas and Krause, Andreas and Schölkopf, Bernhard},
  booktitle = {NeurIPS 2022 Workshops: CML4Impact},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/lorch2022neuripsw-amortized/}
}