Lossy Compression and the Granularity of Causal Representation

Abstract

A given causal system can be represented in a variety of ways. How do agents determine which variables to include in their causal representations, and at what level of granularity? Using techniques from information theory, we develop a formal theory according to which causal representations reflect a trade-off between compression and informativeness. We then show, across three studies (N=1,391), that participants’ choices among causal models demonstrate a preference for more compressed models when all other factors are held fixed, with some further tolerance for lossy compression.
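The abstract does not spell out the exact objective, but compression–informativeness trade-offs of this kind are often formalized in an information-bottleneck style: a coarse-grained cause variable Z should remain predictive of the effect E while admitting a shorter description. The sketch below is a minimal illustration under that assumption only; the joint distribution, the candidate coarsenings, and the trade-off weight beta are hypothetical and not taken from the paper.

```python
# Minimal sketch of a compression-informativeness trade-off for causal
# coarse-graining, in the spirit of the information bottleneck.
# All numbers and candidate models below are illustrative assumptions.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) for a joint distribution given as a 2-D array P(X, Y)."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Fine-grained cause C with 4 values and a binary effect E: P(C, E).
joint_ce = np.array([
    [0.20, 0.05],   # c1
    [0.18, 0.07],   # c2
    [0.05, 0.20],   # c3
    [0.07, 0.18],   # c4
])

def coarsen(joint, partition):
    """Merge rows of P(C, E) according to a partition of C's values."""
    return np.array([joint[list(block)].sum(axis=0) for block in partition])

# Candidate causal representations: keep all 4 cause values, or merge to 2.
candidates = {
    "fine (4 values)":   [(0,), (1,), (2,), (3,)],
    "coarse (2 values)": [(0, 1), (2, 3)],
}

beta = 0.5  # weight on compression; an assumed value for illustration
for name, partition in candidates.items():
    joint_ze = coarsen(joint_ce, partition)
    informativeness = mutual_information(joint_ze)    # I(Z; E)
    description_cost = entropy(joint_ze.sum(axis=1))  # H(Z)
    score = informativeness - beta * description_cost
    print(f"{name}: I(Z;E)={informativeness:.3f}, "
          f"H(Z)={description_cost:.3f}, score={score:.3f}")
```

On this toy distribution, merging the four fine-grained cause values into two loses very little predictive information about E while halving the description cost, so the coarser model scores higher whenever compression carries any weight; this mirrors the lossy-but-tolerated compressions described in the abstract.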

Cite

Text

Kinney and Lombrozo. "Lossy Compression and the Granularity of Causal Representation." NeurIPS 2023 Workshops: InfoCog, 2023.

Markdown

[Kinney and Lombrozo. "Lossy Compression and the Granularity of Causal Representation." NeurIPS 2023 Workshops: InfoCog, 2023.](https://mlanthology.org/neuripsw/2023/kinney2023neuripsw-lossy/)

BibTeX

@inproceedings{kinney2023neuripsw-lossy,
  title     = {{Lossy Compression and the Granularity of Causal Representation}},
  author    = {Kinney, David and Lombrozo, Tania},
  booktitle = {NeurIPS 2023 Workshops: InfoCog},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/kinney2023neuripsw-lossy/}
}