Learning Causally Emergent Representations
Abstract
Cognitive processes usually take place at a macroscopic scale in systems characterised by emergent properties, which make the whole "more than the sum of its parts." While recent proposals have provided quantitative, information-theoretic metrics to detect emergence in time series data, it is often highly non-trivial to identify the relevant macroscopic variables a priori. In this paper we leverage recent advances in representation learning and differentiable information estimators to put forward a data-driven method to find emergent variables. The proposed method successfully detects emergent variables and recovers the ground-truth emergence values in a synthetic dataset. This proof of concept paves the way for future analyses uncovering the emergent structure of cognitive representations in biological and artificial intelligence systems.
Cite
Text
Kaplanis et al. "Learning Causally Emergent Representations." NeurIPS 2023 Workshops: InfoCog, 2023.
Markdown
[Kaplanis et al. "Learning Causally Emergent Representations." NeurIPS 2023 Workshops: InfoCog, 2023.](https://mlanthology.org/neuripsw/2023/kaplanis2023neuripsw-learning/)
BibTeX
@inproceedings{kaplanis2023neuripsw-learning,
title = {{Learning Causally Emergent Representations}},
author = {Kaplanis, Christos and Mediano, Pedro and Rosas, Fernando},
booktitle = {NeurIPS 2023 Workshops: InfoCog},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/kaplanis2023neuripsw-learning/}
}