Diffusion Based Causal Representation Learning

Abstract

Causal reasoning can be considered a cornerstone of intelligent systems. Having access to an underlying causal graph comes with the promise of cause-effect estimation and the identification of efficient and safe interventions. However, depending on the application and the complexity of the system, a single causal graph might be insufficient, and even the variables of interest and the levels of abstraction might change. This is incompatible with currently deployed generative models, including popular VAE approaches, which provide only representations from a point estimate. In this work, we study recently introduced diffusion-based representations, which offer access to infinite-dimensional latent codes that encode different levels of information. As a first proof of principle, we investigate the use of a single point of these infinite-dimensional codes for causal representation learning and demonstrate experimentally that this approach performs comparably well at identifying the causal structure and causal variables.

Cite

Text

Mamaghan et al. "Diffusion Based Causal Representation Learning." ICML 2023 Workshops: SPIGM, 2023.

Markdown

[Mamaghan et al. "Diffusion Based Causal Representation Learning." ICML 2023 Workshops: SPIGM, 2023.](https://mlanthology.org/icmlw/2023/mamaghan2023icmlw-diffusion/)

BibTeX

@inproceedings{mamaghan2023icmlw-diffusion,
  title     = {{Diffusion Based Causal Representation Learning}},
  author    = {Mamaghan, Amir Mohammad Karimi and Dittadi, Andrea and Bauer, Stefan and Quinzan, Francesco},
  booktitle = {ICML 2023 Workshops: SPIGM},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/mamaghan2023icmlw-diffusion/}
}