Linearized Relative Positional Encoding
Abstract
Relative positional encoding is widely used in vanilla and linear transformers to represent positional information. However, existing encoding methods designed for the vanilla transformer are not always directly applicable to linear transformers, because the latter require the query and key representations to be decomposed into separate kernel functions. Moreover, principles for designing encoding methods suitable for linear transformers remain understudied. In this work, we unify a variety of existing linear relative positional encoding approaches under a canonical form and further propose a family of linear relative positional encoding algorithms via unitary transformation. Our formulation leads to a principled framework that can be used to develop new relative positional encoding methods that preserve linear space-time complexity. Equipped with different models, the proposed linearized relative positional encoding (LRPE) family derives effective encodings for various applications. Experiments show that, compared with existing methods, LRPE achieves state-of-the-art performance in language modeling, text classification, and image classification. Meanwhile, it highlights a general paradigm for designing a broader class of relative positional encoding methods applicable to linear transformers.
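The sketch below (not the authors' reference code) illustrates the core idea the abstract describes: in linear attention, queries and keys are first passed through a kernel feature map, and a position-dependent unitary transform is then applied so that the inner product between a transformed query at position s and a transformed key at position t depends only on their relative offset, while attention is still computed in linear time via the (K^T V) trick. The rotation-based instance shown here is one member of the unitary family; the function names, the choice of kernel, and the omission of the normalization term are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kernel(x):
    # Example non-negative feature map used in linear attention (elu(x) + 1).
    return np.where(x > 0, x + 1.0, np.exp(x))

def rotary_unitary(x, positions, base=10000.0):
    # Position-dependent block-diagonal rotation: a unitary transform whose
    # composition across two positions depends only on their difference.
    n, d = x.shape
    half = d // 2
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = positions[:, None] * freqs[None, :]       # (n, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:2 * half]
    out = x.copy()
    out[:, :half] = x1 * cos - x2 * sin
    out[:, half:2 * half] = x1 * sin + x2 * cos
    return out

def linear_attention_with_relative_encoding(Q, K, V):
    # Non-causal linear attention with a rotary-style relative encoding.
    # The n x n attention matrix is never formed; cost is O(n * d * d_v).
    # Normalization (the denominator of attention) is omitted for brevity.
    n, d = Q.shape
    pos = np.arange(n, dtype=np.float64)
    Qp = rotary_unitary(kernel(Q), pos)                # (n, d)
    Kp = rotary_unitary(kernel(K), pos)                # (n, d)
    KV = Kp.T @ V                                      # (d, d_v), linear in n
    return Qp @ KV                                     # (n, d_v)

# Usage: n tokens with head dimension d.
rng = np.random.default_rng(0)
n, d = 8, 16
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention_with_relative_encoding(Q, K, V)
print(out.shape)  # (8, 16)
```

Because the transform is unitary, it preserves the norms of the kernelized features, which is one reason rotation-like encodings compose well with the decomposition required by linear attention.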
Cite
Text
Qin et al. "Linearized Relative Positional Encoding." Transactions on Machine Learning Research, 2023.Markdown
[Qin et al. "Linearized Relative Positional Encoding." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/qin2023tmlr-linearized/)BibTeX
@article{qin2023tmlr-linearized,
  title   = {{Linearized Relative Positional Encoding}},
  author  = {Qin, Zhen and Sun, Weixuan and Lu, Kaiyue and Deng, Hui and Li, Dongxu and Han, Xiaodong and Dai, Yuchao and Kong, Lingpeng and Zhong, Yiran},
  journal = {Transactions on Machine Learning Research},
  year    = {2023},
  url     = {https://mlanthology.org/tmlr/2023/qin2023tmlr-linearized/}
}