Learning Sparse Causal Models Is Not NP-Hard

Abstract

This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for sample/real-world data versions. It does not contradict any known hardness results, and does not solve an NP-hard problem: it just proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks.
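To illustrate why bounding the node degree keeps the number of independence tests polynomial in N, here is a minimal sketch of a generic PC-style skeleton search against an independence oracle. This is not the paper's modified FCI algorithm (which also handles latent variables and selection bias); the function names and the toy chain-graph oracle below are illustrative assumptions only.

```python
from itertools import combinations

def skeleton_search(variables, indep, max_degree):
    """Generic PC-style skeleton phase with an independence oracle.

    `indep(x, y, Z)` answers whether x and y are independent given the
    set Z. With node degree bounded by `max_degree` (k), each pair only
    needs conditioning sets of size <= k drawn from current neighbours,
    so the total number of oracle calls stays polynomial in N.
    This is a sketch, NOT the paper's FCI modification.
    """
    adj = {v: set(variables) - {v} for v in variables}
    tests = 0
    for size in range(max_degree + 1):
        for x in variables:
            for y in sorted(adj[x]):  # sorted for deterministic order
                candidates = adj[x] - {y}
                if len(candidates) < size:
                    continue
                for Z in combinations(sorted(candidates), size):
                    tests += 1
                    if indep(x, y, frozenset(Z)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break  # x and y separated; move on
    return adj, tests

# Toy oracle for the chain a - b - c: the only independence is a _||_ c | b.
chain_indep = lambda x, y, Z: {x, y} == {"a", "c"} and "b" in Z

adj, tests = skeleton_search(["a", "b", "c"], chain_indep, max_degree=1)
# Recovers the chain skeleton: a-b and b-c remain, a-c is removed.
```

With k fixed, the number of conditioning sets per pair grows polynomially in N, which is the intuition behind the paper's worst-case N^(2(k+2)) bound; the paper's actual procedure and constant differ from this sketch.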

Cite

Text

Claassen et al. "Learning Sparse Causal Models Is Not NP-Hard." Conference on Uncertainty in Artificial Intelligence, 2013.

Markdown

[Claassen et al. "Learning Sparse Causal Models Is Not NP-Hard." Conference on Uncertainty in Artificial Intelligence, 2013.](https://mlanthology.org/uai/2013/claassen2013uai-learning/)

BibTeX

@inproceedings{claassen2013uai-learning,
  title     = {{Learning Sparse Causal Models Is Not NP-Hard}},
  author    = {Claassen, Tom and Mooij, Joris M. and Heskes, Tom},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2013},
  url       = {https://mlanthology.org/uai/2013/claassen2013uai-learning/}
}