Detecting Hidden Confounding in Observational Data Using Multiple Environments
Abstract
A common assumption in causal inference from observational data is that there is no hidden confounding. Yet it is, in general, impossible to verify the presence of hidden confounding factors from a single dataset. Under the assumption of independent causal mechanisms underlying the data-generating process, we demonstrate a way to detect unobserved confounders when multiple observational datasets from different environments are available. We present a theory of testable conditional independencies that are absent only when there is hidden confounding, and examine cases where its assumptions are violated: degenerate and dependent mechanisms, and faithfulness violations. Additionally, we propose a procedure to test these independencies and study its empirical finite-sample behavior using simulation studies and semi-synthetic data based on a real-world dataset. In most cases, the proposed procedure correctly predicts the presence of hidden confounding, particularly when the confounding bias is large.
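As a rough illustration of the general idea (not the authors' theory or test statistic), the sketch below simulates multiple environments whose treatment and outcome mechanisms receive independently drawn, environment-specific parameters, in the spirit of the independent-causal-mechanisms assumption. Each environment is reduced to two summary statistics, one per mechanism, and a dependence test across environments serves as the confounding signal. The choice of summaries, the confounding strength, and helper names such as `environment_statistics` are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_environment(rng, confounded, n=2_000):
    """One environment: mechanism parameters are drawn independently per environment."""
    a = rng.normal(0, 1)          # environment-specific shift of the treatment mechanism
    b = rng.normal(0, 1)          # environment-specific shift of the outcome mechanism
    u = rng.normal(0, 1, n)       # hidden confounder (only active if `confounded`)
    t = a + (1.5 * u if confounded else 0) + rng.normal(0, 1, n)
    y = 1.0 * t + b + (1.5 * u if confounded else 0) + rng.normal(0, 1, n)
    return t, y

def environment_statistics(t, y):
    """Summaries of the treatment mechanism and the outcome mechanism given T."""
    t_mean = t.mean()
    slope, intercept = np.polyfit(t, y, 1)   # within-environment regression of Y on T
    return t_mean, intercept

def detect_dependence(n_envs=200, confounded=True):
    t_summaries, y_summaries = [], []
    for _ in range(n_envs):
        t, y = simulate_environment(rng, confounded)
        s_t, s_y = environment_statistics(t, y)
        t_summaries.append(s_t)
        y_summaries.append(s_y)
    # Dependence between the two mechanism summaries across environments is
    # taken here as a proxy signal for hidden confounding.
    r, p_value = stats.pearsonr(t_summaries, y_summaries)
    return r, p_value

for confounded in (False, True):
    r, p = detect_dependence(confounded=confounded)
    print(f"confounded={confounded}: correlation={r:+.2f}, p-value={p:.3g}")
```

In this toy setup, the unconfounded run should give a correlation near zero, while adding the hidden confounder biases the within-environment regression and induces a clear cross-environment dependence between the summaries.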
Cite
Text
Karlsson and Krijthe. "Detecting Hidden Confounding in Observational Data Using Multiple Environments." Neural Information Processing Systems, 2023.
Markdown
[Karlsson and Krijthe. "Detecting Hidden Confounding in Observational Data Using Multiple Environments." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/karlsson2023neurips-detecting/)
BibTeX
@inproceedings{karlsson2023neurips-detecting,
  title     = {{Detecting Hidden Confounding in Observational Data Using Multiple Environments}},
  author    = {Karlsson, Rickard and Krijthe, Jesse},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/karlsson2023neurips-detecting/}
}