Feature-to-Feature Regression for a Two-Step Conditional Independence Test
Abstract
Algorithms for causal discovery, and more broadly for learning the structure of graphical models, require well-calibrated and consistent conditional independence (CI) tests. We revisit CI tests based on two-step procedures involving regression with a subsequent (unconditional) independence test (RESIT) on regression residuals, and investigate the assumptions under which these tests operate. In particular, we demonstrate that when going beyond simple functional relationships with additive noise, such tests can lead to an inflated number of false discoveries. We study the relationship of these tests to those based on dependence measures using reproducing kernel Hilbert spaces (RKHS) and propose an extension of RESIT which uses RKHS-valued regression. The resulting test inherits the simple two-step testing procedure of RESIT, while giving correct Type I error control and competitive power. When used as a component of the PC algorithm, the proposed test is more robust to the case where hidden variables induce a switching behaviour in the associations present in the data.
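The two-step RESIT procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes linear regression in step one and a Pearson-correlation permutation test in step two, whereas the paper's extension uses RKHS-valued regression and kernel dependence measures. All function and variable names here are illustrative.

```python
import numpy as np

def resit_ci_test(x, y, z, n_perm=500, seed=0):
    """Sketch of a two-step RESIT-style test of X _||_ Y | Z.
    Step 1: regress x and y on z and take residuals.
    Step 2: permutation-test (unconditional) independence of residuals.
    Linear regression + Pearson correlation are simplifying assumptions;
    they only suffice under additive-noise-style relationships."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([z, np.ones_like(z)])  # design matrix with intercept
    # Step 1: residuals after regressing out z
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    # Step 2: permutation null for |corr(rx, ry)|
    stat = abs(np.corrcoef(rx, ry)[0, 1])
    null = [abs(np.corrcoef(rng.permutation(rx), ry)[0, 1])
            for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

# Toy example: x and y are both driven by z, so x _||_ y | z holds
rng = np.random.default_rng(1)
z = rng.normal(size=300)
x = z + 0.5 * rng.normal(size=300)
y = z + 0.5 * rng.normal(size=300)
p = resit_ci_test(x, y, z)
```

A large p-value here (no rejection) is the correct behaviour, since the dependence between x and y is fully explained by z; the paper's point is that this simple recipe miscalibrates once the relationships are no longer additive-noise regressions, which the RKHS-valued variant addresses.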
Cite
Text
Zhang et al. "Feature-to-Feature Regression for a Two-Step Conditional Independence Test." Conference on Uncertainty in Artificial Intelligence, 2017.
Markdown
[Zhang et al. "Feature-to-Feature Regression for a Two-Step Conditional Independence Test." Conference on Uncertainty in Artificial Intelligence, 2017.](https://mlanthology.org/uai/2017/zhang2017uai-feature/)
BibTeX
@inproceedings{zhang2017uai-feature,
  title = {{Feature-to-Feature Regression for a Two-Step Conditional Independence Test}},
  author = {Zhang, Qinyi and Filippi, Sarah and Flaxman, Seth R. and Sejdinovic, Dino},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year = {2017},
  url = {https://mlanthology.org/uai/2017/zhang2017uai-feature/}
}