Enhancing Concept-Based Learning with Logic

Abstract

Concept-based models promote learning in terms of high-level, transferable abstractions. These models offer one additional level of transparency over a black-box model, as the predictions are a weighted combination of concepts. The relations between concepts are a rich source of information that would complement learning. We propose using propositional logic derived from the concepts to model these relations and to address the expressivity-vs-interpretability tradeoff in these models. Three architectural variants that give rise to logic-enhanced models are introduced. We analyse several ways of training them and experimentally show that logic-enhanced concept-based models give better concept alignment and interpretability, while not losing out on performance. These models allow for a richer formal expression of predictions, paving the way for logical reasoning with symbolic concepts.
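
To make the idea concrete, here is a minimal sketch of one way a concept bottleneck could be combined with soft propositional logic over its concepts. The class name, the product t-norm relaxation, and the example rule are illustrative assumptions, not the specific architectures or training schemes proposed in the paper.

```python
import torch
import torch.nn as nn

# Soft propositional connectives via the product t-norm (one common
# differentiable relaxation; the paper's choice may differ).
def t_and(a, b):  # soft conjunction
    return a * b

def t_not(a):     # standard fuzzy negation
    return 1.0 - a

class LogicEnhancedCBM(nn.Module):
    """Hypothetical concept bottleneck whose label head sees both the
    concept probabilities and the soft truth values of logic rules
    defined over those concepts."""

    def __init__(self, in_dim, n_concepts, n_classes, rules):
        super().__init__()
        self.concept_head = nn.Linear(in_dim, n_concepts)
        self.rules = rules  # callables mapping concept probs -> rule truths
        self.label_head = nn.Linear(n_concepts + len(rules), n_classes)

    def forward(self, x):
        c = torch.sigmoid(self.concept_head(x))                     # concepts in [0, 1]
        r = torch.stack([rule(c) for rule in self.rules], dim=-1)   # soft rule truths
        y = self.label_head(torch.cat([c, r], dim=-1))              # weighted combination
        return c, r, y

# Hypothetical rule over concept indices 0 and 1: "c0 AND NOT c1".
rules = [lambda c: t_and(c[:, 0], t_not(c[:, 1]))]
model = LogicEnhancedCBM(in_dim=16, n_concepts=4, n_classes=3, rules=rules)
c, r, y = model(torch.randn(8, 16))
```

Because the rule truths are differentiable, such a model could in principle be trained end to end with standard concept and label losses; the prediction remains a weighted combination of interpretable, logic-augmented features.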

Cite

Text

Vemuri et al. "Enhancing Concept-Based Learning with Logic." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024.

Markdown

[Vemuri et al. "Enhancing Concept-Based Learning with Logic." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024.](https://mlanthology.org/icmlw/2024/vemuri2024icmlw-enhancing/)

BibTeX

@inproceedings{vemuri2024icmlw-enhancing,
  title     = {{Enhancing Concept-Based Learning with Logic}},
  author    = {Vemuri, Deepika and Bellamkonda, Gautham and Balasubramanian, Vineeth N.},
  booktitle = {ICML 2024 Workshops: Differentiable_Almost_Everything},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/vemuri2024icmlw-enhancing/}
}