Counterfactual Robustness: A Framework to Analyze the Robustness of Causal Generative Models Across Interventions

Abstract

Data generation using generative models is one of the most impressively growing fields of artificial intelligence. However, such models are black boxes trained on huge datasets and lack interpretability. Causality is a natural framework for incorporating expert knowledge into deep generative models; other expected benefits of causal generative models include fairness, transparency, and robustness of the generation process. To the best of our knowledge, while many works have analyzed the robustness of general generative models, surprisingly none have focused on their causal counterparts, even though robustness is a common claim for them. In the present paper, we introduce the fundamental concept of counterfactual robustness, which evaluates how sensitive causal generative models are to interventions under distribution shifts. Through a series of experiments on synthetic and real-life datasets, we demonstrate that the studied causal generative models are not all equal with respect to counterfactual robustness. More surprisingly, we show that causal interventions are not all equally robust either. We provide a simple explanation based on the causal mechanisms between the variables, which is theoretically grounded in the case of an extended CausalVAE. Our in-depth analysis also yields an efficient way to identify the most robust intervention based on prior knowledge of the causal graph.

Cite

Text

Benhamza et al. "Counterfactual Robustness: A Framework to Analyze the Robustness of Causal Generative Models Across Interventions." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-05962-8_23

Markdown

[Benhamza et al. "Counterfactual Robustness: A Framework to Analyze the Robustness of Causal Generative Models Across Interventions." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/benhamza2025ecmlpkdd-counterfactual/) doi:10.1007/978-3-032-05962-8_23

BibTeX

@inproceedings{benhamza2025ecmlpkdd-counterfactual,
  title     = {{Counterfactual Robustness: A Framework to Analyze the Robustness of Causal Generative Models Across Interventions}},
  author    = {Benhamza, Manal and Clausel, Marianne and Tami, Myriam},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2025},
  pages     = {391--408},
  doi       = {10.1007/978-3-032-05962-8_23},
  url       = {https://mlanthology.org/ecmlpkdd/2025/benhamza2025ecmlpkdd-counterfactual/}
}