Algebraic Positional Encodings

Abstract

We introduce a novel positional encoding strategy for Transformer-style models, addressing the shortcomings of existing, often ad hoc, approaches. Our framework implements a flexible mapping from the algebraic specification of a domain to a positional encoding scheme where positions are interpreted as orthogonal operators. This design preserves the structural properties of the source domain, thereby ensuring that the end model upholds them. The framework can accommodate various structures, including sequences, grids, and trees, as well as their compositions. We conduct a series of experiments demonstrating the practical applicability of our method. Our results suggest performance on par with or surpassing the current state of the art, without hyperparameter optimization or "task search" of any kind. Code is available at https://aalto-quml.github.io/ape/.
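To make the "positions as orthogonal operators" idea concrete, below is a minimal NumPy sketch of the sequential case only; it is an illustration under our own assumptions, not the authors' implementation (see the linked repository for that). It encodes position `p` as the `p`-th power of a single orthogonal matrix `w`, so that the inner product between operator-transformed vectors depends only on their relative position, since `(w^p x)^T (w^q y) = x^T w^(q-p) y`. All function names here (`random_orthogonal`, `positional_operators`) are hypothetical.

```python
import numpy as np

def random_orthogonal(d: int, seed: int = 0) -> np.ndarray:
    """Sample a random d x d orthogonal matrix via QR decomposition."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    # Fix column signs so q is drawn uniformly (q stays orthogonal).
    return q * np.sign(np.diag(r))

def positional_operators(w: np.ndarray, n: int) -> np.ndarray:
    """Interpret position p as the operator w^p, with w^0 = identity."""
    d = w.shape[0]
    ops = np.empty((n, d, d))
    ops[0] = np.eye(d)
    for p in range(1, n):
        ops[p] = ops[p - 1] @ w
    return ops

# Orthogonality gives (w^p)^T = w^(-p), hence attention scores between
# positions p and q reduce to a function of the offset q - p alone:
d, n = 8, 5
w = random_orthogonal(d)
ops = positional_operators(w, n)
x, y = np.random.default_rng(1).standard_normal((2, d))
lhs = (ops[1] @ x) @ (ops[3] @ y)   # absolute positions 1 and 3
rhs = x @ (ops[2] @ y)              # relative distance 3 - 1 = 2
assert np.allclose(lhs, rhs)
```

In this reading, richer domains such as grids or trees would correspond to words over several orthogonal generators rather than powers of a single matrix; the sketch above covers only the one-generator (sequence) case.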

Cite

Text

Kogkalidis et al. "Algebraic Positional Encodings." Neural Information Processing Systems, 2024. doi:10.52202/079017-1099

Markdown

[Kogkalidis et al. "Algebraic Positional Encodings." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/kogkalidis2024neurips-algebraic/) doi:10.52202/079017-1099

BibTeX

@inproceedings{kogkalidis2024neurips-algebraic,
  title     = {{Algebraic Positional Encodings}},
  author    = {Kogkalidis, Konstantinos and Bernardy, Jean-Philippe and Garg, Vikas},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1099},
  url       = {https://mlanthology.org/neurips/2024/kogkalidis2024neurips-algebraic/}
}