Causal Representation Learning and Inference via Mixture-Based Priors

Abstract

Causal Representation Learning (CRL) aims to uncover causal symmetries in the data-generating process with minimal assumptions and data requirements. The challenge lies in identifying the causal factors and learning their relationships, which is an inherently ill-posed problem. Ensuring unique solutions, known as \emph{identifiability}, is crucial but often requires strong assumptions or access to interventional or counterfactual data. In this work, we propose a novel approach that partitions the latent space: one component captures causal factors using diffeomorphic flows to model causal mechanisms, while the other accounts for exogenous noise. This structured decomposition enables our model to scale effectively to high-dimensional data and deep architectures. We establish theoretical guarantees for CRL by proving the identifiability of both causal factors and exogenous noise. Empirical results across multiple datasets validate our theoretical findings.
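
The abstract's key architectural idea is a partitioned latent space: a causal block transformed by a diffeomorphic flow and a separate exogenous-noise block. The sketch below is a hypothetical illustration of that idea (not the authors' implementation); all module names, dimensions, and the affine-coupling flow are assumptions chosen for brevity.

```python
# Hypothetical sketch, not the paper's code: a VAE-style encoder whose latent
# space is split into a causal block z_c and an exogenous-noise block z_n.
# A single affine coupling layer stands in for the diffeomorphic flow that
# the paper uses to model causal mechanisms.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Minimal invertible (diffeomorphic) transform applied to the causal latents."""

    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        scale, shift = self.net(z1).chunk(2, dim=-1)
        # Invertible given z1: z2 can be recovered exactly from the output.
        z2 = z2 * torch.exp(torch.tanh(scale)) + shift
        return torch.cat([z1, z2], dim=-1)


class PartitionedEncoder(nn.Module):
    """Encoder producing a causal block z_c and an exogenous-noise block z_n."""

    def __init__(self, x_dim, causal_dim, noise_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.to_causal = nn.Linear(128, causal_dim)
        self.to_noise = nn.Linear(128, noise_dim)
        self.flow = AffineCoupling(causal_dim)

    def forward(self, x):
        h = self.backbone(x)
        z_c = self.flow(self.to_causal(h))  # causal factors, shaped by the flow
        z_n = self.to_noise(h)              # exogenous noise, left unstructured
        return z_c, z_n


# Usage: encode a batch and inspect the two latent blocks.
enc = PartitionedEncoder(x_dim=32, causal_dim=8, noise_dim=4)
z_c, z_n = enc(torch.randn(16, 32))
print(z_c.shape, z_n.shape)  # torch.Size([16, 8]) torch.Size([16, 4])
```

In the actual method the flow and prior carry the identifiability guarantees; this sketch only shows how the two latent blocks could be separated structurally.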

Cite

Text

Kori et al. "Causal Representation Learning and Inference via Mixture-Based Priors." ICLR 2025 Workshops: DeLTa, 2025.

Markdown

[Kori et al. "Causal Representation Learning and Inference via Mixture-Based Priors." ICLR 2025 Workshops: DeLTa, 2025.](https://mlanthology.org/iclrw/2025/kori2025iclrw-causal/)

BibTeX

@inproceedings{kori2025iclrw-causal,
  title     = {{Causal Representation Learning and Inference via Mixture-Based Priors}},
  author    = {Kori, Avinash and Balsells-Rodas, Carles and Glocker, Ben and Li, Yingzhen and Locatello, Francesco},
  booktitle = {ICLR 2025 Workshops: DeLTa},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/kori2025iclrw-causal/}
}