Finding Latent Causes in Causal Networks: An Efficient Approach Based on Markov Blankets
Abstract
Causal structure-discovery techniques usually assume that all causes of more than one variable are observed. This is the so-called causal sufficiency assumption. In practice, it is untestable and often violated. In this paper, we present an efficient causal structure-learning algorithm, MBCS*, suited for causally insufficient data. Like algorithms such as IC* and FCI, the proposed approach drops the causal sufficiency assumption and learns a structure that indicates (potential) latent causes for pairs of observed variables. Assuming a constant local density of the data-generating graph, our algorithm performs a number of conditional-independence tests that is quadratic in the number of variables. Experiments show that its accuracy is comparable to that of the state-of-the-art FCI algorithm, while it is several orders of magnitude faster on large problems. We conclude that MBCS* makes a new range of causally insufficient problems computationally tractable.
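To make the Markov-blanket idea in the abstract concrete, below is a minimal Python sketch of a grow-shrink Markov-blanket search driven by a Fisher-z partial-correlation independence test, the kind of building block such an approach relies on. This is an illustration under stated assumptions, not the authors' MBCS* implementation: the grow-shrink heuristic, function names, significance level, and toy data are all hypothetical.

```python
"""Illustrative sketch only: a simplified grow-shrink Markov-blanket search
built on a Fisher-z partial-correlation CI test. Names and heuristics are
assumptions for exposition; this is not the MBCS* algorithm from the paper."""

import numpy as np
from scipy.stats import norm


def fisher_z_ci_test(data, i, j, cond, alpha=0.05):
    """Return True if X_i _||_ X_j given X_cond is NOT rejected at level alpha."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)                          # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = float(np.clip(r, -0.999999, 0.999999))
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p_value = 2 * norm.sf(abs(z))
    return p_value > alpha


def markov_blanket(data, target, n_vars, ci_test=fisher_z_ci_test):
    """Simplified grow-shrink search for the Markov blanket of `target`."""
    mb = []
    # Grow: add any variable dependent on the target given the current blanket.
    for v in range(n_vars):
        if v != target and not ci_test(data, target, v, mb):
            mb.append(v)
    # Shrink: drop variables that become independent given the rest of the blanket.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if ci_test(data, target, v, rest):
            mb.remove(v)
    return set(mb)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    # Toy linear-Gaussian chain X0 -> X1 -> X2; X0 should leave X2's blanket.
    x0 = rng.normal(size=n)
    x1 = 0.8 * x0 + rng.normal(size=n)
    x2 = 0.7 * x1 + rng.normal(size=n)
    data = np.column_stack([x0, x1, x2])
    for t in range(3):
        print(f"Markov blanket of X{t}:", markov_blanket(data, t, 3))
```

In a full structure-learning pass, a blanket like this would be computed for every variable; with the linear-per-variable search above, the total number of CI tests is quadratic in the number of variables, consistent with the complexity claim in the abstract. Edge orientation and the marking of potential latent common causes would follow as separate steps not shown here.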
Cite
Text
Pellet and Elisseeff. "Finding Latent Causes in Causal Networks: An Efficient Approach Based on Markov Blankets." Neural Information Processing Systems, 2008.
Markdown
[Pellet and Elisseeff. "Finding Latent Causes in Causal Networks: An Efficient Approach Based on Markov Blankets." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/pellet2008neurips-finding/)
BibTeX
@inproceedings{pellet2008neurips-finding,
title = {{Finding Latent Causes in Causal Networks: An Efficient Approach Based on Markov Blankets}},
author = {Pellet, Jean-Philippe and Elisseeff, André},
booktitle = {Neural Information Processing Systems},
year = {2008},
pages = {1249--1256},
url = {https://mlanthology.org/neurips/2008/pellet2008neurips-finding/}
}