Higher Order Transformers with Kronecker-Structured Attention
Abstract
Modern datasets are increasingly high-dimensional and multiway, often represented as tensor-valued data with multi-indexed variables. While Transformers excel in sequence modeling and high-dimensional tasks, their direct application to multiway data is computationally prohibitive due to the quadratic cost of dot-product attention and the need to flatten inputs, which disrupts tensor structure and cross-dimensional dependencies. We propose the Higher-Order Transformer (HOT), a novel factorized attention framework that represents multiway attention as sums of Kronecker products or sums of mode-wise attention matrices. HOT efficiently captures dense and sparse relationships across dimensions while preserving tensor structure. Theoretically, HOT retains the expressiveness of full high-order attention and allows complexity control via factorization rank. Experiments on 2D and 3D datasets show that HOT achieves competitive performance in multivariate time series forecasting and image classification, with significantly reduced computational and memory costs. Visualizations of mode-wise attention matrices further reveal interpretable high-order dependencies learned by HOT, demonstrating its versatility for complex multiway data across diverse domains.
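The abstract only sketches the mechanism, but the core idea, replacing full attention over a flattened tensor with a sum of Kronecker products of small mode-wise attention matrices, can be illustrated with a short sketch. The following is a minimal NumPy example under my own assumptions: how the mode-wise attention matrices are built (here, from queries/keys of inputs pooled over the other mode), the normalization, and all names and shapes are illustrative, not the authors' implementation. The einsum line shows why the full (IJ) x (IJ) attention matrix never needs to be materialized.

```python
# Minimal, illustrative sketch of Kronecker-structured attention over a
# 2-way (matrix-shaped) input, assuming a rank-R sum of mode-wise
# attention matrices. Names, shapes, and the way mode-wise queries/keys
# are formed are assumptions, not the paper's implementation.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mode_attention(X, Wq, Wk, mode):
    # Pool over the *other* mode, then build a standard scaled
    # dot-product attention matrix along `mode` (I x I or J x J).
    Xm = X.mean(axis=1 - mode)                   # (I, d) or (J, d)
    Q, K = Xm @ Wq, Xm @ Wk
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]))

def kronecker_attention(X, params):
    # Full attention on the flattened tensor would be (I*J) x (I*J).
    # Here it is factorized as A = sum_r A1_r (Kronecker) A2_r and applied
    # mode-wise via einsum, so the large matrix is never materialized.
    I, J, d = X.shape
    out = np.zeros_like(X)
    for (Wq1, Wk1), (Wq2, Wk2) in params:
        A1 = mode_attention(X, Wq1, Wk1, mode=0)  # (I, I)
        A2 = mode_attention(X, Wq2, Wk2, mode=1)  # (J, J)
        # Equivalent to (np.kron(A1, A2) @ X.reshape(I*J, d)).reshape(I, J, d)
        out += np.einsum('ik,jl,kld->ijd', A1, A2, X)
    return out / len(params)

# Toy usage: a 4 x 6 grid of 8-dimensional tokens, rank-2 factorization.
rng = np.random.default_rng(0)
I, J, d, R = 4, 6, 8, 2
X = rng.normal(size=(I, J, d))
params = [((rng.normal(size=(d, d)), rng.normal(size=(d, d))),
           (rng.normal(size=(d, d)), rng.normal(size=(d, d)))) for _ in range(R)]
Y = kronecker_attention(X, params)               # (I, J, d): tensor structure preserved
```

Each rank term only requires mode-wise attention matrices of size I x I and J x J, which is the source of the complexity reduction claimed above; the rank R then controls how expressive the factorized attention is.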
Cite
Text
Omranpour et al. "Higher Order Transformers with Kronecker-Structured Attention." Transactions on Machine Learning Research, 2025.
Markdown
[Omranpour et al. "Higher Order Transformers with Kronecker-Structured Attention." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/omranpour2025tmlr-higher/)
BibTeX
@article{omranpour2025tmlr-higher,
title = {{Higher Order Transformers with Kronecker-Structured Attention}},
author = {Omranpour, Soroush and Rabusseau, Guillaume and Rabbany, Reihaneh},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/omranpour2025tmlr-higher/}
}