Amortized Inference of Causal Models via Conditional Fixed-Point Iterations

Abstract

Structural Causal Models (SCMs) offer a principled framework to reason about interventions and support out-of-distribution generalization, which are key goals in scientific discovery. However, the task of learning SCMs from observed data poses formidable challenges, and often requires training a separate model for each dataset. In this work, we propose an amortized inference framework that trains a single model to predict the causal mechanisms of SCMs conditioned on their observational data and causal graph. We first use a transformer-based architecture for amortized learning of dataset embeddings, and then extend the Fixed-Point Approach (FiP) to infer the causal mechanisms conditioned on these dataset embeddings. As a byproduct, our method can generate observational and interventional data from novel SCMs at inference time, without updating parameters. Empirical results show that our amortized procedure performs on par with baselines trained specifically for each dataset on both in- and out-of-distribution problems, and also outperforms them in scarce data regimes.
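The fixed-point view of SCMs underlying the abstract can be illustrated with a minimal sketch (this is not the paper's architecture): for a linear SCM `x = W^T x + n` whose weight matrix `W` is strictly upper-triangular (nodes in topological order of the DAG), iterating the causal map for `d` steps recovers the exact observational sample, because `W` is nilpotent; an intervention simply replaces one node's mechanism inside the iteration.

```python
import numpy as np

# Hypothetical linear SCM over d nodes in topological order:
# x = W^T x + noise, with W strictly upper-triangular (edge i -> j only for i < j).
rng = np.random.default_rng(0)
d = 4
W = np.triu(rng.normal(size=(d, d)), k=1)
noise = rng.normal(size=d)

# Fixed-point iteration x_{k+1} = W^T x_k + noise converges in d steps,
# since (W^T)^d = 0 for a DAG in topological order.
x = np.zeros(d)
for _ in range(d):
    x = W.T @ x + noise

# Closed-form solution of the linear SCM for comparison.
x_exact = np.linalg.solve(np.eye(d) - W.T, noise)
assert np.allclose(x, x_exact)

# Interventional sampling: do(x_1 = 2.0) replaces node 1's mechanism
# with a constant inside the same fixed-point iteration.
x_int = np.zeros(d)
for _ in range(d):
    x_int = W.T @ x_int + noise
    x_int[1] = 2.0
```

The paper's contribution amortizes this idea: instead of a fixed `W`, a conditional fixed-point map is predicted from a dataset embedding, so novel SCMs can be sampled from at inference time without parameter updates.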

Cite

Text

Mahajan et al. "Amortized Inference of Causal Models via Conditional Fixed-Point Iterations." Transactions on Machine Learning Research, 2025.

Markdown

[Mahajan et al. "Amortized Inference of Causal Models via Conditional Fixed-Point Iterations." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/mahajan2025tmlr-amortized/)

BibTeX

@article{mahajan2025tmlr-amortized,
  title     = {{Amortized Inference of Causal Models via Conditional Fixed-Point Iterations}},
  author    = {Mahajan, Divyat and Gladrow, Jannes and Hilmkil, Agrin and Zhang, Cheng and Scetbon, Meyer},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/mahajan2025tmlr-amortized/}
}