On the Role of Noise in Factorizers for Disentangling Distributed Representations

Abstract

One can exploit the compute-in-superposition capabilities of vector-symbolic architectures (VSA) to efficiently factorize high-dimensional distributed representations into their constituent atomic vectors. Such factorizers, however, suffer from limit cycles. Applying noise during the iterative decoding is one mechanism to address this issue. In this paper, we explore ways to further relax the noise requirement by applying noise only when initializing the VSA's reconstruction codebook. While the need for noise during iterations makes analog in-memory computing systems a natural implementation medium, the adequacy of initialization noise keeps digital hardware an equally viable option. This broadens the implementation possibilities of factorizers. Our study finds that while the best performance shifts from initialization noise to iterative noise as the number of factors increases from 2 to 4, both extend the operational capacity by at least $50\times$ compared to the baseline resonator network factorizer. Our code is available at: https://github.com/IBM/in-memory-factorizer
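To make the two noise regimes concrete, below is a minimal NumPy sketch of a two-factor resonator network with both noise hooks. This is not the authors' implementation (see the linked repository for that): the bipolar codebooks, the dimensions, and the additive-Gaussian noise placement (`init_noise` perturbing the initial factor estimates, `iter_noise` perturbing the per-iteration similarity scores) are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 1024, 32                      # vector dimensionality, codewords per factor

def bsign(v):
    """Bipolar sign activation (maps 0 to +1 to avoid dead elements)."""
    return np.where(v >= 0, 1, -1)

# Random bipolar codebooks for the two factors
X = rng.choice([-1, 1], size=(M, D))
Y = rng.choice([-1, 1], size=(M, D))

# Composite vector: element-wise (Hadamard) binding of one codeword per factor
s = X[3] * Y[17]

def resonate(s, X, Y, iters=200, iter_noise=0.0, init_noise=0.0):
    # Initial estimates: superposition of all codewords, optionally perturbed
    # by Gaussian noise at initialization (the `init_noise` regime)
    x_hat = bsign(X.sum(axis=0) + init_noise * rng.standard_normal(D))
    y_hat = bsign(Y.sum(axis=0) + init_noise * rng.standard_normal(D))
    for _ in range(iters):
        # Unbind the other factor's estimate, score against the codebook,
        # then clean up; `iter_noise` models noisy similarity scores
        a_x = X @ (s * y_hat) + iter_noise * rng.standard_normal(M)
        x_hat = bsign(a_x @ X)
        a_y = Y @ (s * x_hat) + iter_noise * rng.standard_normal(M)
        y_hat = bsign(a_y @ Y)
    # Read out the decoded codeword index for each factor
    return np.argmax(X @ x_hat), np.argmax(Y @ y_hat)

print(resonate(s, X, Y))             # expected: (3, 17)
```

Setting `iter_noise > 0` loosely mimics the intrinsic randomness of analog in-memory matrix-vector multiplies, while using only `init_noise` corresponds to the digital-friendly regime the abstract describes, where noise is needed only once, at initialization.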

Cite

Text

Karunaratne et al. "On the Role of Noise in Factorizers for Disentangling Distributed Representations." NeurIPS 2024 Workshops: MLNCP, 2024.

Markdown

[Karunaratne et al. "On the Role of Noise in Factorizers for Disentangling Distributed Representations." NeurIPS 2024 Workshops: MLNCP, 2024.](https://mlanthology.org/neuripsw/2024/karunaratne2024neuripsw-role/)

BibTeX

@inproceedings{karunaratne2024neuripsw-role,
  title     = {{On the Role of Noise in Factorizers for Disentangling Distributed Representations}},
  author    = {Karunaratne, Geethan and Hersche, Michael and Sebastian, Abu and Rahimi, Abbas},
  booktitle = {NeurIPS 2024 Workshops: MLNCP},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/karunaratne2024neuripsw-role/}
}