ChordMixer: A Scalable Neural Attention Model for Sequences with Different Length

Abstract

Sequential data in many domains naturally vary in length, and some sequences are very long. As an important modeling tool, neural attention should capture long-range interactions in such sequences. However, most existing neural attention models admit only short sequences, or they must employ chunking or padding to enforce a constant input length. Here we propose a simple neural network building block called ChordMixer that can model attention for long sequences of variable length. Each ChordMixer block consists of a position-wise rotation layer without learnable parameters and an element-wise MLP layer. Repeatedly applying such blocks forms an effective network backbone that mixes the input signals towards the learning targets. We have tested ChordMixer on the synthetic adding problem, long document classification, and DNA sequence-based taxonomy classification. The experimental results show that our method substantially outperforms other neural attention models.
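
The block described in the abstract (a parameter-free rotation followed by a position-wise MLP) can be illustrated with a short sketch. The code below is a minimal, hedged illustration in PyTorch, not the authors' implementation: the class name `ChordMixerBlock`, the hidden width, and the exact shift schedule (track *k* rolled by 2^k positions) are assumptions chosen to match the Chord-style mixing described above.

```python
# Minimal sketch of a ChordMixer-style block (illustrative, not the authors' code).
import math
import torch
import torch.nn as nn

class ChordMixerBlock(nn.Module):
    def __init__(self, channels: int, max_len: int, hidden: int = 128):
        super().__init__()
        # One channel track per power of two needed to cover the longest sequence.
        self.n_tracks = math.ceil(math.log2(max_len))
        assert channels % self.n_tracks == 0, "channels must split evenly into tracks"
        self.track_size = channels // self.n_tracks
        # Position-wise MLP: the only learnable part of the block.
        self.mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.GELU(),
            nn.Linear(hidden, channels),
        )

    def rotate(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, channels). Parameter-free rotation: track k is rolled
        # by 2**k positions along the sequence axis (assumed Chord-style distances).
        tracks = x.split(self.track_size, dim=-1)
        rotated = [torch.roll(t, shifts=2 ** k, dims=0) for k, t in enumerate(tracks)]
        return torch.cat(rotated, dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rotation mixes information across positions; the MLP mixes channels per position.
        return self.mlp(self.rotate(x))
```

Under this sketch, stacking on the order of ⌈log2 N⌉ such blocks gives every position a path to every other position, and because the rotation is just a circular roll, it works for any input length without chunking or padding.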

Cite

Text

Khalitov et al. "ChordMixer: A Scalable Neural Attention Model for Sequences with Different Length." International Conference on Learning Representations, 2023.

Markdown

[Khalitov et al. "ChordMixer: A Scalable Neural Attention Model for Sequences with Different Length." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/khalitov2023iclr-chordmixer/)

BibTeX

@inproceedings{khalitov2023iclr-chordmixer,
  title     = {{ChordMixer: A Scalable Neural Attention Model for Sequences with Different Length}},
  author    = {Khalitov, Ruslan and Yu, Tong and Cheng, Lei and Yang, Zhirong},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/khalitov2023iclr-chordmixer/}
}