Stable Differentiable Causal Discovery

Abstract

Inferring causal relationships as directed acyclic graphs (DAGs) is an important but challenging problem. Differentiable Causal Discovery (DCD) is a promising approach to this problem, framing the search as a continuous optimization. But existing DCD methods are numerically unstable, with poor performance beyond tens of variables. In this paper, we propose Stable Differentiable Causal Discovery (SDCD), a new method that improves on previous DCD methods in two ways: (1) It employs an alternative constraint for acyclicity; this constraint is more stable, both theoretically and empirically, and fast to compute. (2) It uses a training procedure tailored for sparse causal graphs, which are common in real-world scenarios. We first derive SDCD and prove its stability and correctness. We then evaluate it with both observational and interventional data and in both small-scale and large-scale settings. We find that SDCD outperforms existing methods in convergence speed and accuracy, and can scale to thousands of variables.
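The abstract leaves the alternative acyclicity constraint unspecified. One constraint with the stated properties is the spectral radius of the element-wise squared weight matrix: for a nonnegative matrix, the spectral radius is zero exactly when the associated directed graph has no cycles, and it can be estimated cheaply with power iteration. Below is a minimal NumPy sketch of a penalty of this kind, for illustration only; the function name, iteration count, and tolerance are our own choices, not the authors' implementation.

import numpy as np

def spectral_acyclicity_penalty(W, n_iters=50, eps=1e-12):
    """Estimate rho(W * W), which is 0 iff the weighted graph is a DAG."""
    A = W * W  # element-wise square: a nonnegative surrogate adjacency matrix
    d = A.shape[0]
    x = np.full(d, 1.0 / np.sqrt(d))  # uniform unit starting vector
    lam = 0.0
    for _ in range(n_iters):
        y = A @ x
        lam = np.linalg.norm(y)
        if lam < eps:  # A is (numerically) nilpotent, so the graph is acyclic
            return 0.0
        x = y / lam  # normalized power-iteration step
    return float(lam)  # approximates the dominant eigenvalue of A

# A 3-cycle yields a positive penalty; deleting one edge makes it a DAG.
W_cycle = np.array([[0., 1., 0.],
                    [0., 0., 1.],
                    [1., 0., 0.]])
W_dag = W_cycle.copy()
W_dag[2, 0] = 0.
print(spectral_acyclicity_penalty(W_cycle))  # ~1.0 (cycle present)
print(spectral_acyclicity_penalty(W_dag))    # 0.0 (acyclic)

In a full training loop such a penalty would be differentiated and weighted alongside the data-fitting loss; power iteration can oscillate on some non-primitive matrices, so practical implementations typically maintain running eigenvector estimates across steps rather than restarting each time.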

Cite

Text

Nazaret et al. "Stable Differentiable Causal Discovery." International Conference on Machine Learning, 2024.

Markdown

[Nazaret et al. "Stable Differentiable Causal Discovery." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/nazaret2024icml-stable/)

BibTeX

@inproceedings{nazaret2024icml-stable,
  title     = {{Stable Differentiable Causal Discovery}},
  author    = {Nazaret, Achille and Hong, Justin and Azizi, Elham and Blei, David},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {37413--37445},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/nazaret2024icml-stable/}
}