Unsupervised Causal Abstraction
Abstract
Causal abstraction aims to map a complex causal model to a simpler ("reduced") one. Causal consistency constraints have been established to link the initial "low-level" model to its "high-level" counterpart, and identifiability results for such mappings can be established when some information about the high-level variables is available. In contrast, we study the problem of learning a causal abstraction in an *unsupervised* manner, that is, without any information about the high-level causal model. In such a setting, multiple causally consistent abstractions typically exist, and additional constraints must be imposed to unambiguously select a high-level model. To achieve this, we supplement a Kullback-Leibler-divergence-based consistency loss with a projection loss, which seeks the causal abstraction that best captures the variations of the low-level variables, thereby eliminating trivial solutions. The projection loss bears similarity to the Principal Component Analysis (PCA) algorithm; in this work it is shown to have a causal interpretation. We experimentally show how the abstraction preferred by the projection loss varies with the causal coefficients.
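A minimal sketch of the PCA-like projection loss the abstract describes: for a linear one-dimensional abstraction of low-level variables, the loss measures how well the low-level data can be reconstructed from the high-level variable. The toy SCM, its coefficients, and the direction sweep below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-level linear SCM (illustrative, not from the paper):
# X1 := N1,  X2 := a * X1 + N2, with small noise N2.
a = 2.0
n = 5000
N1 = rng.normal(size=n)
N2 = 0.1 * rng.normal(size=n)
X = np.stack([N1, a * N1 + N2], axis=1)  # low-level samples, shape (n, 2)

def projection_loss(X, t):
    """Mean reconstruction error of projecting X onto the unit direction t.

    Minimizing this over t is exactly the PCA objective: the best t is the
    top principal direction of X.
    """
    t = t / np.linalg.norm(t)
    Z = X @ t                # 1-D high-level variable (abstraction of X)
    X_hat = np.outer(Z, t)   # reconstruct the low-level variables from Z
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

# Sweep candidate abstraction directions over a half-circle; with small N2,
# the minimizer tracks the dominant causal direction (1, a) and thus moves
# with the causal coefficient a.
angles = np.linspace(0.0, np.pi, 181)
losses = [projection_loss(X, np.array([np.cos(th), np.sin(th)])) for th in angles]
best = np.array([np.cos(angles[np.argmin(losses)]),
                 np.sin(angles[np.argmin(losses)])])
print(best)  # approximately proportional to (1, a)
```

Re-running the sweep with a different coefficient `a` shifts the minimizing direction accordingly, which mirrors the abstract's observation that the preferred abstraction varies with the causal coefficients.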
Cite

[Zhu et al. "Unsupervised Causal Abstraction." NeurIPS 2024 Workshops: CRL, 2024.](https://mlanthology.org/neuripsw/2024/zhu2024neuripsw-unsupervised/)
@inproceedings{zhu2024neuripsw-unsupervised,
  title     = {{Unsupervised Causal Abstraction}},
  author    = {Zhu, Yuchen and Mejia, Sergio Hernan Garrido and Schölkopf, Bernhard and Besserve, Michel},
  booktitle = {NeurIPS 2024 Workshops: CRL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/zhu2024neuripsw-unsupervised/}
}