Learning the Structure of Linear Latent Variable Models
Abstract
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems.
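The clustering step rests on constraints that hold over observed covariances when a set of measured variables shares a single latent linear cause. A minimal sketch of that idea (not the authors' actual search procedure) is below: in a simulated one-factor linear model, the sample covariances approximately satisfy the vanishing tetrad constraints that such a search can test for.

```python
# Sketch only: illustrates the vanishing tetrad constraints for a pure
# one-factor linear cluster, not the paper's search algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# One latent common cause L and four observed indicators X1..X4
# (loadings and noise scale are arbitrary choices for illustration).
L = rng.normal(size=n)
loadings = np.array([0.9, 0.7, 1.2, 0.5])
X = L[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

S = np.cov(X, rowvar=False)

# The three tetrad differences; all should be close to zero when the
# four variables are d-separated by a single latent cause in a linear model.
t1 = S[0, 1] * S[2, 3] - S[0, 2] * S[1, 3]
t2 = S[0, 1] * S[2, 3] - S[0, 3] * S[1, 2]
t3 = S[0, 2] * S[1, 3] - S[0, 3] * S[1, 2]
print(t1, t2, t3)
```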
Cite
Text
Silva et al. "Learning the Structure of Linear Latent Variable Models." Journal of Machine Learning Research, 2006.

Markdown

[Silva et al. "Learning the Structure of Linear Latent Variable Models." Journal of Machine Learning Research, 2006.](https://mlanthology.org/jmlr/2006/silva2006jmlr-learning/)

BibTeX
@article{silva2006jmlr-learning,
title = {{Learning the Structure of Linear Latent Variable Models}},
author = {Silva, Ricardo and Scheines, Richard and Glymour, Clark and Spirtes, Peter},
journal = {Journal of Machine Learning Research},
year = {2006},
pages = {191-246},
volume = {7},
url = {https://mlanthology.org/jmlr/2006/silva2006jmlr-learning/}
}