Towards Compositionality in Concept Learning

Abstract

Concept-based interpretability methods offer a lens into the internals of foundation models by decomposing their embeddings into high-level concepts. These concept representations are most useful when they are compositional, meaning that the individual concepts compose to explain the full sample. We show that existing unsupervised concept extraction methods find concepts which are not compositional. To automatically discover compositional concept representations, we identify two salient properties of such representations, and propose Compositional Concept Extraction (CCE) for finding concepts which obey these properties. We evaluate CCE on five different datasets over image and text data. Our evaluation shows that CCE finds more compositional concept representations than baselines and yields better accuracy on four downstream classification tasks.

Cite

Text

Stein et al. "Towards Compositionality in Concept Learning." International Conference on Machine Learning, 2024.

Markdown

[Stein et al. "Towards Compositionality in Concept Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/stein2024icml-compositionality/)

BibTeX

@inproceedings{stein2024icml-compositionality,
  title     = {{Towards Compositionality in Concept Learning}},
  author    = {Stein, Adam and Naik, Aaditya and Wu, Yinjun and Naik, Mayur and Wong, Eric},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {46530--46555},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/stein2024icml-compositionality/}
}