Relational Composition in Neural Networks: A Survey and Call to Action
Abstract
Many neural nets appear to represent data as linear combinations of "feature vectors." Algorithms for discovering these vectors have seen impressive recent success. However, we argue that this success is incomplete without an understanding of relational composition: how (or whether) neural nets combine feature vectors to represent more complicated relationships. To facilitate research in this area, this paper offers a guided tour of various relational mechanisms that have been proposed, along with preliminary analysis of how such mechanisms might affect the search for interpretable features. We end with a series of promising areas for empirical research, which may help determine how neural networks represent structured data.
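The "linear combinations of feature vectors" picture the abstract refers to can be sketched in a few lines of code. The following is a toy illustration of that representational view, not anything from the paper itself; the dimensions, feature dictionary, and coefficients are all made up for the example:

```python
import numpy as np

# Toy sketch of the linear-representation view: an activation vector is
# (approximately) a sparse linear combination of learned feature directions.
# All sizes and values below are illustrative assumptions.
rng = np.random.default_rng(0)

d_model, n_features = 8, 20
# Hypothetical dictionary of feature vectors, one direction per row.
feature_dirs = rng.normal(size=(n_features, d_model))

# A sparse set of active features with their coefficients.
coeffs = np.zeros(n_features)
coeffs[3], coeffs[7] = 1.5, -0.5

# The activation is the weighted sum of the active feature directions.
activation = coeffs @ feature_dirs
```

Feature-discovery methods (e.g. dictionary learning) try to recover `feature_dirs` and `coeffs` from observed activations; the paper's question is how relationships *between* such features are encoded on top of this linear structure.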
Cite
Wattenberg and Viégas. "Relational Composition in Neural Networks: A Survey and Call to Action." ICML 2024 Workshops: MI, 2024.
BibTeX
@inproceedings{wattenberg2024icmlw-relational,
title = {{Relational Composition in Neural Networks: A Survey and Call to Action}},
author = {Wattenberg, Martin and Viégas, Fernanda},
booktitle = {ICML 2024 Workshops: MI},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/wattenberg2024icmlw-relational/}
}