A Walsh Hadamard Derived Linear Vector Symbolic Architecture

Abstract

Vector Symbolic Architectures (VSAs) are one approach to developing neuro-symbolic AI, where two vectors in $\mathbb{R}^d$ are 'bound' together to produce a new vector in the same space. VSAs support the commutativity and associativity of this binding operation, along with an inverse operation, allowing one to construct symbolic-style manipulations over real-valued vectors. Most VSAs were developed before deep learning and automatic differentiation became popular and instead focused on efficacy in hand-designed systems. In this work, we introduce the Hadamard-derived linear Binding (HLB), which is designed to have favorable computational efficiency, efficacy in classic VSA tasks, and strong performance in differentiable systems.
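The algebraic properties the abstract describes (commutative, associative binding with an inverse for unbinding) can be illustrated with a minimal sketch. Note this uses a generic elementwise (Hadamard) product binding over random $\pm 1$ vectors, in the style of MAP-type VSAs, purely as an illustration; it is not the paper's HLB operator, whose exact definition is given in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # illustrative dimensionality

# Random vectors standing in for symbols; sampled from {-1, +1}
# so the elementwise inverse is well defined everywhere.
a = rng.choice([-1.0, 1.0], size=d)
b = rng.choice([-1.0, 1.0], size=d)
c = rng.choice([-1.0, 1.0], size=d)

def bind(x, y):
    # Elementwise (Hadamard) product: the output lives in the same space R^d.
    return x * y

def inverse(x):
    # Elementwise inverse; for +/-1 entries this equals x itself.
    return 1.0 / x

# Binding is commutative and associative.
assert np.allclose(bind(a, b), bind(b, a))
assert np.allclose(bind(bind(a, b), c), bind(a, bind(b, c)))

# Unbinding with the inverse recovers the other symbol exactly.
assert np.allclose(bind(bind(a, b), inverse(b)), a)
```

Every operation here is a cheap elementwise product, which is one reason Hadamard-style bindings are attractive both computationally and inside differentiable systems.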

Cite

Text

Alam et al. "A Walsh Hadamard Derived Linear Vector Symbolic Architecture." Neural Information Processing Systems, 2024. doi:10.52202/079017-0089

Markdown

[Alam et al. "A Walsh Hadamard Derived Linear Vector Symbolic Architecture." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/alam2024neurips-walsh/) doi:10.52202/079017-0089

BibTeX

@inproceedings{alam2024neurips-walsh,
  title     = {{A Walsh Hadamard Derived Linear Vector Symbolic Architecture}},
  author    = {Alam, Mohammad Mahmudul and Oberle, Alexander and Raff, Edward and Biderman, Stella and Oates, Tim and Holt, James},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0089},
  url       = {https://mlanthology.org/neurips/2024/alam2024neurips-walsh/}
}