Stable Differentiable Causal Discovery
Abstract
Inferring causal relationships as directed acyclic graphs (DAGs) is an important but challenging problem. Differentiable Causal Discovery (DCD) is a promising approach to this problem, framing the search as a continuous optimization. But existing DCD methods are numerically unstable, with poor performance beyond tens of variables. In this paper, we propose Stable Differentiable Causal Discovery (SDCD), a new method that improves on previous DCD methods in two ways: (1) It employs an alternative constraint for acyclicity; this constraint is more stable, both theoretically and empirically, and fast to compute. (2) It uses a training procedure tailored for sparse causal graphs, which are common in real-world scenarios. We first derive SDCD and prove its stability and correctness. We then evaluate it with observational and interventional data, in both small- and large-scale settings. We find that SDCD outperforms existing methods in convergence speed and accuracy, and can scale to thousands of variables.
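The abstract does not spell out the alternative acyclicity constraint, so as an illustration only, here is a minimal PyTorch sketch of one stable constraint in this family: a spectral-radius penalty on W ∘ W estimated by power iteration, which is zero exactly when the weighted graph is acyclic and avoids the overflow issues of the matrix-exponential constraint tr(exp(W ∘ W)) − d used by earlier DCD methods. The function name and all implementation details below are our assumptions, not the paper's code.

import torch

def spectral_acyclicity_penalty(W: torch.Tensor, n_iter: int = 20) -> torch.Tensor:
    # Illustrative sketch, not the paper's implementation.
    # Estimates the spectral radius of A = W * W (elementwise square, so
    # entries are non-negative). rho(A) = 0 iff the graph of nonzero
    # entries of W is acyclic, so rho(A) can serve as an acyclicity penalty.
    A = W * W
    d = A.shape[0]
    # Positive start vectors; power iteration keeps them non-negative.
    u = torch.full((d,), 1.0 / d, device=A.device)
    v = torch.full((d,), 1.0 / d, device=A.device)
    with torch.no_grad():
        # Iterate toward the dominant left (u) and right (v) eigenvectors.
        for _ in range(n_iter):
            v = A @ v
            v = v / (v.norm() + 1e-12)
            u = A.T @ u
            u = u / (u.norm() + 1e-12)
    # Rayleigh-quotient-style estimate u^T A v / (u^T v). With u, v held
    # fixed, its gradient w.r.t. A is the standard eigenvalue derivative
    # u v^T / (u^T v), so the penalty is differentiable in W.
    return (u @ (A @ v)) / (u @ v + 1e-12)

Each power-iteration step costs O(d^2), and the penalty stays bounded for large d and large edge weights, which is one reason spectral-radius-style constraints are a stable alternative to the matrix exponential; the penalty would typically be driven to zero with a standard scheme such as an augmented Lagrangian.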
Cite
Text
Nazaret et al. "Stable Differentiable Causal Discovery." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024.
Markdown
[Nazaret et al. "Stable Differentiable Causal Discovery." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024.](https://mlanthology.org/icmlw/2024/nazaret2024icmlw-stable/)
BibTeX
@inproceedings{nazaret2024icmlw-stable,
title = {{Stable Differentiable Causal Discovery}},
author = {Nazaret, Achille and Hong, Justin and Azizi, Elham and Blei, David},
booktitle = {ICML 2024 Workshops: Differentiable_Almost_Everything},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/nazaret2024icmlw-stable/}
}