Representation Learning: A Causal Perspective
Abstract
Representation learning constructs low-dimensional representations to summarize essential features of high-dimensional data. This learning problem is often approached by describing various desiderata associated with learned representations; e.g., that they be non-spurious, efficient, or disentangled. It can be challenging, however, to turn these intuitive desiderata into formal criteria that can be measured and enhanced based on observed data. In this paper, we take a causal perspective on representation learning, formalizing desiderata like non-spuriousness and demonstrating their practical utility.
Cite
Text
Wang. "Representation Learning: A Causal Perspective." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I27.35124
Markdown
[Wang. "Representation Learning: A Causal Perspective." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-representation/) doi:10.1609/AAAI.V39I27.35124
BibTeX
@inproceedings{wang2025aaai-representation,
title = {{Representation Learning: A Causal Perspective}},
author = {Wang, Yixin},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {28731},
doi = {10.1609/AAAI.V39I27.35124},
url = {https://mlanthology.org/aaai/2025/wang2025aaai-representation/}
}