On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations Among N Variables
Abstract
We show that if any number of variables are allowed to be simultaneously and independently randomized in any one experiment, log2(N) + 1 experiments are sufficient and in the worst case necessary to determine the causal relations among N >= 2 variables when no latent variables, no sample selection bias and no feedback cycles are present. For all K, 0 < K < N/2, we provide an upper bound on the number of experiments required to determine causal structure when each experiment simultaneously randomizes K variables. For large N, these bounds are significantly lower than the N - 1 bound required when each experiment randomizes at most one variable. For kmax < N/2, we show that (N/kmax - 1) + (N/(2 kmax)) log2(kmax) experiments are sufficient and in the worst case necessary. We offer a conjecture as to the minimal number of experiments that are in the worst case sufficient to identify all causal relations among N observed variables that are a subset of the vertices of a DAG.
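To make the quoted bounds concrete, the following is a minimal Python sketch (not from the paper) that evaluates the three quantities mentioned in the abstract for a few example values of N and kmax. Taking the ceiling of log2(N) for values of N that are not powers of two is our own assumption; the abstract states the bound simply as log2(N) + 1.

```python
import math

def bound_unrestricted(n):
    """Worst-case number of experiments when any number of variables may be
    simultaneously and independently randomized: log2(N) + 1
    (ceiling assumed here for N not a power of two)."""
    return math.ceil(math.log2(n)) + 1

def bound_kmax(n, kmax):
    """Upper bound when each experiment randomizes at most kmax < N/2 variables:
    (N/kmax - 1) + (N/(2*kmax)) * log2(kmax)."""
    assert 0 < kmax < n / 2
    return (n / kmax - 1) + (n / (2 * kmax)) * math.log2(kmax)

def bound_single(n):
    """Baseline when each experiment randomizes at most one variable: N - 1."""
    return n - 1

if __name__ == "__main__":
    n = 32
    for k in (1, 2, 4, 8):
        print(f"N={n}, kmax={k}: {bound_kmax(n, k):.1f} experiments")
    print(f"N={n}, unrestricted: {bound_unrestricted(n)} experiments")
    print(f"N={n}, single-variable baseline: {bound_single(n)} experiments")
```

For N = 32, for instance, the kmax-bounded expression gives 31, 23, 15, and 9 experiments for kmax = 1, 2, 4, 8 respectively, while the unrestricted bound is 6; this illustrates how randomizing more variables per experiment reduces the worst-case number of experiments below the single-intervention baseline of N - 1.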
Cite
Text
Eberhardt et al. "On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations Among N Variables." Conference on Uncertainty in Artificial Intelligence, 2005.
Markdown
[Eberhardt et al. "On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations Among N Variables." Conference on Uncertainty in Artificial Intelligence, 2005.](https://mlanthology.org/uai/2005/eberhardt2005uai-number/)
BibTeX
@inproceedings{eberhardt2005uai-number,
title = {{On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations Among N Variables}},
author = {Eberhardt, Frederick and Glymour, Clark and Scheines, Richard},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2005},
pages = {178-184},
url = {https://mlanthology.org/uai/2005/eberhardt2005uai-number/}
}