Approximate Causal Abstractions
Abstract
Scientific models describe natural phenomena at different levels of abstraction. Abstract descriptions can provide the basis for interventions on the system and explanation of observed phenomena at a level of granularity that is coarser than the most fundamental account of the system. Beckers and Halpern (2019), building on prior work of Rubenstein et al. (2017), developed an account of abstraction for causal models that is exact. Here we extend this account to the more realistic case where an abstract causal model only offers an approximation of the underlying system. We show how the resulting account handles the discrepancy that can arise between low- and high-level causal models of the same system, and in the process provide an account of how one causal model approximates another, a topic of independent interest. Finally, we extend the account of approximate abstractions to probabilistic causal models, indicating how and where uncertainty can enter into an approximate abstraction.
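As a rough illustration of the core idea (a minimal sketch with assumed toy models, not the paper's formal definitions), the snippet below pairs a hypothetical low-level model with a coarser high-level one via an abstraction map `tau`, and measures the worst-case gap between abstracting the low-level outcome and the high-level prediction across interventions. A gap of zero would correspond to an exact abstraction; a small positive gap corresponds to an approximate one. The names `low_level`, `tau`, `high_level`, and the tolerance `eps` are illustrative assumptions.

```python
from itertools import product

# Hypothetical low-level model: two binary causes X1, X2 and an effect Z
# with a small interaction term that the high-level model will ignore.
def low_level(x1, x2):
    return {"X1": x1, "X2": x2, "Z": x1 + x2 + 0.25 * x1 * x2}

# Abstraction map tau: collapse (X1, X2) into one high-level cause C = X1 + X2
# and carry the effect along unchanged.
def tau(low_state):
    return {"C": low_state["X1"] + low_state["X2"], "E": low_state["Z"]}

# Hypothetical high-level model: predicts E = C, dropping the interaction.
def high_level(c):
    return {"C": c, "E": float(c)}

def max_discrepancy():
    """Worst-case gap between tau(low-level outcome) and the high-level
    prediction, over all low-level interventions; 0 would mean exactness."""
    worst = 0.0
    for x1, x2 in product([0, 1], repeat=2):
        abstracted = tau(low_level(x1, x2))
        predicted = high_level(x1 + x2)
        worst = max(worst, abs(abstracted["E"] - predicted["E"]))
    return worst

if __name__ == "__main__":
    eps = 0.25  # assumed tolerance for this toy example
    gap = max_discrepancy()
    print(f"worst-case discrepancy = {gap:.2f}; "
          f"approximate abstraction at tolerance {eps}: {gap <= eps}")
```

In this toy setup the only mismatch occurs at the intervention setting both low-level causes to 1, so the high-level model is an approximate (but not exact) abstraction at tolerance 0.25.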
Cite

Text
Beckers et al. "Approximate Causal Abstractions." Uncertainty in Artificial Intelligence, 2019.

Markdown
[Beckers et al. "Approximate Causal Abstractions." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/beckers2019uai-approximate/)

BibTeX
@inproceedings{beckers2019uai-approximate,
title = {{Approximate Causal Abstractions}},
author = {Beckers, Sander and Eberhardt, Frederick and Halpern, Joseph Y.},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2019},
pages = {606-615},
volume = {115},
url = {https://mlanthology.org/uai/2019/beckers2019uai-approximate/}
}