Multiply-Robust Causal Change Attribution
Abstract
Comparing two samples of data, we observe a change in the distribution of an outcome variable. In the presence of multiple explanatory variables, how much of the change can be explained by each possible cause? We develop a new estimation strategy that, given a causal model, combines regression and re-weighting methods to quantify the contribution of each causal mechanism. Our proposed methodology is multiply robust, meaning that it still recovers the target parameter under partial misspecification. We prove that our estimator is consistent and asymptotically normal. Moreover, it can be incorporated into existing frameworks for causal attribution, such as Shapley values, which will inherit the consistency and large-sample distribution properties. Our method demonstrates excellent performance in Monte Carlo simulations, and we show its usefulness in an empirical application. Our method is implemented as part of the Python library “DoWhy” (Sharma & Kiciman, 2020; Blöbaum et al., 2022).
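To illustrate the regression/re-weighting combination the abstract describes, here is a minimal NumPy sketch on synthetic data (not the paper's implementation). The setup is a single cause X → Y: sample 0 has X ~ N(0,1) and Y = 2X + 1 + ε, sample 1 has X ~ N(1,1) and Y = 2X + 3 + ε, so both mechanisms change. The counterfactual mean under the new X-distribution but the old Y-mechanism is estimated three ways — by regression, by re-weighting with the (here analytically known) density ratio, and by a doubly robust combination of the two — and then used to split the total change in E[Y] into an X-shift part and a Y|X-shift part. The linear model and known density ratio are simplifications for the sketch; in practice both would be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sample 0: X ~ N(0,1), Y = 2X + 1 + eps
x0 = rng.normal(0.0, 1.0, n)
y0 = 2 * x0 + 1 + rng.normal(0.0, 1.0, n)

# Sample 1: X ~ N(1,1), Y = 2X + 3 + eps  (both mechanisms shifted)
x1 = rng.normal(1.0, 1.0, n)
y1 = 2 * x1 + 3 + rng.normal(0.0, 1.0, n)

# Target: theta = E_{P1(X)}[ E_0[Y | X] ]  (new X-distribution, old Y-mechanism).
# True value here: 2 * 1 + 1 = 3.

# (a) Regression estimate: fit E_0[Y|X] on sample 0, average over sample 1's X.
slope, intercept = np.polyfit(x0, y0, 1)
m0 = lambda x: slope * x + intercept
theta_reg = m0(x1).mean()

# (b) Re-weighting estimate: density ratio p1(x)/p0(x) = exp(x - 1/2)
# for N(1,1) vs N(0,1) (known analytically in this toy setup).
w = np.exp(x0 - 0.5)
theta_rw = np.mean(w * y0) / np.mean(w)  # self-normalized weights

# (c) Doubly robust combination: correct the regression estimate with a
# re-weighted residual term; remains consistent if either (a) or (b) is right.
theta_dr = m0(x1).mean() + np.mean(w * (y0 - m0(x0))) / np.mean(w)

# Decompose the total change E1[Y] - E0[Y] (= 4) into mechanism contributions.
change_X = theta_dr - y0.mean()    # due to the shift in P(X),   ~2
change_YX = y1.mean() - theta_dr   # due to the shift in P(Y|X), ~2
```

In the paper's setting this decomposition is done along a causal graph with many mechanisms, and the counterfactual means enter Shapley-value attributions; the doubly robust form in (c) is what gives the estimator its robustness to misspecifying either the regression or the weights.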
Cite
Text
Quintas-Martinez et al. "Multiply-Robust Causal Change Attribution." International Conference on Machine Learning, 2024.
Markdown
[Quintas-Martinez et al. "Multiply-Robust Causal Change Attribution." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/quintasmartinez2024icml-multiplyrobust/)
BibTeX
@inproceedings{quintasmartinez2024icml-multiplyrobust,
title = {{Multiply-Robust Causal Change Attribution}},
author = {Quintas-Martinez, Victor and Bahadori, Mohammad Taha and Santiago, Eduardo and Mu, Jeff and Heckerman, David},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {41821--41840},
volume = {235},
url = {https://mlanthology.org/icml/2024/quintasmartinez2024icml-multiplyrobust/}
}