Testable Implications of Linear Structural Equation Models

Abstract

In causal inference, all methods of model learning rely on testable implications, namely, properties of the joint distribution that are dictated by the model structure. These constraints, if not satisfied in the data, allow us to reject or modify the model. Most common methods of testing a linear structural equation model (SEM) rely on the likelihood ratio or chi-square test, which simultaneously tests all of the restrictions implied by the model. Local constraints, on the other hand, offer increased power (Bollen and Pearl, 2013; McDonald, 2002) and, in the case of failure, provide the modeler with insight for revising the model specification. One strategy for uncovering local constraints in linear SEMs is to search for overidentified path coefficients. While these overidentifying constraints are well known, no method has been given for systematically discovering them. In this paper, we extend the half-trek criterion of Foygel et al. (2012) to identify a larger set of structural coefficients and use it to systematically discover overidentifying constraints. Still open is the question of whether our algorithm is complete.
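As a toy illustration of the overidentification idea described in the abstract (this sketch is not from the paper, and the model and coefficient values are hypothetical): in a linear SEM where two exogenous variables Z1 and Z2 each influence X, and X influences Y, the coefficient on X → Y can be recovered from either instrument. Agreement of the two estimates is a local, testable constraint on the covariance matrix.

```python
import numpy as np

# Hypothetical linear SEM (illustrative only):
#   X = 0.8*Z1 + 0.5*Z2 + eps_x
#   Y = 1.5*X + eps_y
# The coefficient c = 1.5 on X -> Y is overidentified:
#   cov(Z1, Y) / cov(Z1, X)  and  cov(Z2, Y) / cov(Z2, X)
# both equal c, so their difference being zero is a testable
# local constraint on the observed covariances.
rng = np.random.default_rng(0)
n = 200_000
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
x = 0.8 * z1 + 0.5 * z2 + rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)

def cov(u, v):
    # Off-diagonal entry of the 2x2 sample covariance matrix
    return np.cov(u, v)[0, 1]

c_hat_1 = cov(z1, y) / cov(z1, x)
c_hat_2 = cov(z2, y) / cov(z2, x)
print(c_hat_1, c_hat_2)  # two independent estimates of the same coefficient
```

If the model were misspecified (for example, a direct Z1 → Y edge were omitted), the two estimates would diverge, flagging the specific part of the model to revise.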

Cite

Text

Chen et al. "Testable Implications of Linear Structural Equation Models." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.9065

Markdown

[Chen et al. "Testable Implications of Linear Structural Equation Models." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/chen2014aaai-testable/) doi:10.1609/AAAI.V28I1.9065

BibTeX

@inproceedings{chen2014aaai-testable,
  title     = {{Testable Implications of Linear Structural Equation Models}},
  author    = {Chen, Bryant and Tian, Jin and Pearl, Judea},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2014},
  pages     = {2424--2430},
  doi       = {10.1609/AAAI.V28I1.9065},
  url       = {https://mlanthology.org/aaai/2014/chen2014aaai-testable/}
}