From Block-Toeplitz Matrices to Differential Equations on Graphs: Towards a General Theory for Scalable Masked Transformers
Abstract
In this paper we provide, to the best of our knowledge, the first comprehensive approach to incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE-attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE-masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
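The log-linear RPE case illustrates the kind of structure the paper exploits: when the mask N is Toeplitz (its entries depend only on the relative position i - j), the masked linear attention (N ∘ Q'K'ᵀ)V decomposes into Toeplitz matrix-vector products, each computable in O(L log L) with the FFT. The sketch below is a minimal NumPy illustration of this idea, not the authors' implementation; the function names and the example mask f are illustrative assumptions, and a brute-force O(L²) computation is included only as a correctness check.

```python
# Minimal sketch (assumed names, not the paper's code) of log-linear RPE-masked
# linear attention: (N o (Q' K'^T)) V with a Toeplitz mask N[i, j] = f(i - j),
# computed without materializing the L x L mask.
import numpy as np

def toeplitz_matvec_fft(f, x):
    """Multiply the Toeplitz matrix T, T[i, j] = f[i - j + L - 1], by vector x
    in O(L log L) via circulant embedding and the FFT.
    `f` has length 2L - 1 and stores mask values for offsets -(L-1)..(L-1)."""
    L = x.shape[0]
    # First column of the circulant embedding: offsets 0..L-1, then -(L-1)..-1.
    c = np.concatenate([f[L - 1:], f[:L - 1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, n=2 * L - 1))
    return y[:L].real

def rpe_masked_linear_attention(Qp, Kp, V, f):
    """Return D^{-1} (N o (Qp Kp^T)) V for feature-mapped queries/keys Qp, Kp
    of shape (L, m) and values V of shape (L, d), where N[i, j] = f[i - j + L - 1]."""
    L, m = Qp.shape
    d = V.shape[1]
    # [(N o Qp Kp^T) V]_i = q_i^T sum_j f(i - j) k_j v_j^T : each (a, b) slice of
    # the outer products k_j v_j^T undergoes one Toeplitz matvec over j.
    KV = np.einsum('ja,jb->jab', Kp, V)                      # (L, m, d)
    acc = np.empty((L, m, d))
    for a in range(m):
        for b in range(d):
            acc[:, a, b] = toeplitz_matvec_fft(f, KV[:, a, b])
    out = np.einsum('ia,iad->id', Qp, acc)
    # Normalizer D_ii = q_i^T sum_j f(i - j) k_j, via the same Toeplitz trick.
    Ksum = np.stack([toeplitz_matvec_fft(f, Kp[:, a]) for a in range(m)], axis=1)
    D = np.einsum('ia,ia->i', Qp, Ksum)
    return out / D[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, m, d = 64, 8, 4
    Qp, Kp = rng.random((L, m)), rng.random((L, m))          # nonnegative features
    V = rng.standard_normal((L, d))
    f = np.exp(-np.abs(np.arange(-(L - 1), L)) / 10.0)       # example RPE mask f(i - j)
    N = f[np.subtract.outer(np.arange(L), np.arange(L)) + L - 1]
    A = N * (Qp @ Kp.T)
    brute = (A @ V) / A.sum(axis=1, keepdims=True)           # O(L^2) reference
    fast = rpe_masked_linear_attention(Qp, Kp, V, f)         # O(L log L) per slice
    print(np.max(np.abs(brute - fast)))                      # ~1e-12
```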
Cite
Text
Choromanski et al. "From Block-Toeplitz Matrices to Differential Equations on Graphs: Towards a General Theory for Scalable Masked Transformers." International Conference on Machine Learning, 2022.
Markdown
[Choromanski et al. "From Block-Toeplitz Matrices to Differential Equations on Graphs: Towards a General Theory for Scalable Masked Transformers." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/choromanski2022icml-blocktoeplitz/)
BibTeX
@inproceedings{choromanski2022icml-blocktoeplitz,
title = {{From Block-Toeplitz Matrices to Differential Equations on Graphs: Towards a General Theory for Scalable Masked Transformers}},
author = {Choromanski, Krzysztof and Lin, Han and Chen, Haoxian and Zhang, Tianyi and Sehanobish, Arijit and Likhosherstov, Valerii and Parker-Holder, Jack and Sarlos, Tamas and Weller, Adrian and Weingarten, Thomas},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {3962--3983},
volume = {162},
url = {https://mlanthology.org/icml/2022/choromanski2022icml-blocktoeplitz/}
}