Structured Linear CDEs: Maximally Expressive and Parallel-in-Time Sequence Models

Abstract

This work introduces Structured Linear Controlled Differential Equations (SLiCEs), a unifying framework for sequence models with structured, input-dependent state-transition matrices that retain the maximal expressivity of dense matrices whilst being cheaper to compute. The framework encompasses existing architectures, such as input-dependent block-diagonal linear recurrent neural networks and DeltaNet's diagonal-plus-low-rank structure, as well as two novel variants based on sparsity and the Walsh-Hadamard transform. We prove that, unlike the diagonal state-transition matrices of S4D and Mamba, SLiCEs employing block-diagonal, sparse, or Walsh-Hadamard matrices match the maximal expressivity of dense matrices. Empirically, SLiCEs solve the $A_5$ state-tracking benchmark with a single layer, achieve best-in-class length generalisation on regular language tasks among parallel-in-time models, and match the performance of log neural controlled differential equations on six multivariate time-series classification datasets while cutting the average time per training step by a factor of twenty.
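
Example

The core computation behind the framework is a linear recurrence h_t = A_t h_{t-1} + b_t, where the state-transition matrix A_t depends on the input at step t. The sketch below is a minimal, illustrative JAX implementation (not the authors' code; all names and shapes are hypothetical) of this recurrence with a block-diagonal A_t, one of the structures the abstract names. It is evaluated parallel-in-time with an associative scan, using the fact that the composition of two affine maps is itself affine.

import jax
import jax.numpy as jnp

def block_diag_affine_scan(As, bs):
    """Parallel-in-time evaluation of h_t = A_t h_{t-1} + b_t with h_0 = 0.

    As: (T, num_blocks, k, k) block-diagonal transition matrices.
    bs: (T, num_blocks, k) input terms.
    Returns hs: (T, num_blocks, k), the hidden state after each step.
    """
    def compose(left, right):
        # Composing the earlier map (A1, b1) with the later map (A2, b2)
        # yields (A2 A1, A2 b1 + b2), computed block-wise because the
        # matrices are block-diagonal.
        A1, b1 = left
        A2, b2 = right
        A = jnp.einsum('...ij,...jk->...ik', A2, A1)
        b = jnp.einsum('...ij,...j->...i', A2, b1) + b2
        return A, b

    # The scan returns the cumulative affine map at each step; applied
    # to h_0 = 0, the cumulative offset is exactly the hidden state.
    _, hs = jax.lax.associative_scan(compose, (As, bs))
    return hs

# Toy usage: in a real model, As and bs would be produced from the input
# sequence, e.g. by a learned linear map; here they are random.
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
T, num_blocks, k = 128, 4, 8  # state dimension num_blocks * k = 32
As = jnp.eye(k) + 0.01 * jax.random.normal(key1, (T, num_blocks, k, k))
bs = jax.random.normal(key2, (T, num_blocks, k))
print(block_diag_affine_scan(As, bs).shape)  # (128, 4, 8)

Because each A_t is block-diagonal, every composition in the scan costs O(num_blocks * k^3) rather than the O((num_blocks * k)^3) of a dense transition matrix, which is the source of the cost saving the abstract describes.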

Cite

Text

Walker et al. "Structured Linear CDEs: Maximally Expressive and Parallel-in-Time Sequence Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Walker et al. "Structured Linear CDEs: Maximally Expressive and Parallel-in-Time Sequence Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/walker2025neurips-structured/)

BibTeX

@inproceedings{walker2025neurips-structured,
  title     = {{Structured Linear CDEs: Maximally Expressive and Parallel-in-Time Sequence Models}},
  author    = {Walker, Benjamin and Yang, Lingyi and Cirone, Nicola Muca and Salvi, Cristopher and Lyons, Terry},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/walker2025neurips-structured/}
}