Neural Causal Abstractions

Abstract

The ability of humans to understand the world in terms of cause-and-effect relationships, as well as their ability to compress information into abstract concepts, are two hallmark features of human intelligence. These two topics have been studied in tandem under the theory of causal abstractions, but it remains an open problem how to best leverage abstraction theory in real-world causal inference tasks, where the true model is not known and only limited data is available in most practical settings. In this paper, we focus on a family of causal abstractions constructed by clustering variables and their domains, redefining abstractions so that they are amenable to individual causal distributions. We show that such abstractions can be learned in practice using Neural Causal Models, allowing us to utilize the deep learning toolkit to solve causal tasks (identification, estimation, sampling) at different levels of abstraction granularity. Finally, we show how representation learning can be used to learn abstractions, which we apply in our experiments to scale causal inferences to high-dimensional settings such as image data.
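The abstract mentions abstractions built by clustering variables and their domains. A minimal sketch of that idea, in a toy model of our own construction (the variable names, the map `tau`, and the toy structural model are illustrative assumptions, not the paper's implementation): several low-level variables are grouped into one high-level variable, and the cluster's joint domain is coarsened by a surjective abstraction map.

```python
import itertools

def tau(sample):
    """Hypothetical abstraction map: cluster {X1, X2} into X and
    coarsen the cluster's joint domain to the parity of the sum."""
    x_high = (sample["X1"] + sample["X2"]) % 2
    return {"X": x_high, "Y": sample["Y"]}

# Toy low-level model (an assumption for illustration): X1, X2 are
# binary causes and Y = X1 XOR X2. Enumerate all low-level samples.
low_level_samples = [
    {"X1": x1, "X2": x2, "Y": x1 ^ x2}
    for x1, x2 in itertools.product([0, 1], repeat=2)
]

# Applying tau yields samples over the high-level variables {X, Y}.
high_level_samples = [tau(s) for s in low_level_samples]

# In this toy model the abstraction is consistent: the coarsened
# cluster X determines Y exactly, so the high-level description
# loses nothing relevant to Y.
assert all(s["Y"] == s["X"] for s in high_level_samples)
```

The paper learns such maps with Neural Causal Models and representation learning rather than specifying them by hand; this sketch only illustrates the clustering-and-coarsening structure of the abstraction family.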

Cite

Text

Xia and Bareinboim. "Neural Causal Abstractions." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/aaai.v38i18.30044

Markdown

[Xia and Bareinboim. "Neural Causal Abstractions." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/xia2024aaai-neural/) doi:10.1609/aaai.v38i18.30044

BibTeX

@inproceedings{xia2024aaai-neural,
  title     = {{Neural Causal Abstractions}},
  author    = {Xia, Kevin and Bareinboim, Elias},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {20585--20595},
  doi       = {10.1609/aaai.v38i18.30044},
  url       = {https://mlanthology.org/aaai/2024/xia2024aaai-neural/}
}