Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry

Abstract

Neural networks can accurately forecast complex dynamical systems, yet how they internally represent the underlying latent geometry remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems—ranging from periodic to chaotic—we reveal reproducible family-level structure: multilayer perceptrons align with other MLPs and recurrent networks with other RNNs, while transformers and echo-state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment. Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
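The abstract does not spell out the construction, but anchor-based relative embeddings are commonly built by representing each latent vector through its cosine similarities to a fixed set of anchor latents. The sketch below illustrates that idea under this assumption (the function name and the cosine-similarity choice are illustrative, not taken from the paper): because cosine similarity is unchanged when latents and anchors undergo the same rotation or uniform scaling, the resulting coordinates are invariant to exactly the ambiguities the abstract mentions.

```python
import numpy as np

def relative_embedding(latents, anchors):
    """Map absolute latent vectors to anchor-relative coordinates.

    Each latent is represented by its cosine similarity to every anchor.
    Since both latents and anchors transform together under a rotation or
    uniform scaling of the latent space, the output is invariant to both.

    latents: (n_points, d) array of latent states
    anchors: (n_anchors, d) array of anchor latents
    returns: (n_points, n_anchors) relative-coordinate matrix
    """
    ln = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    an = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return ln @ an.T

# Invariance check: rotating and scaling the whole latent space
# (latents and anchors together) leaves the relative embedding unchanged.
rng = np.random.default_rng(0)
Z = rng.normal(size=(10, 5))          # latent states
A = rng.normal(size=(4, 5))           # anchor latents
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random rotation
assert np.allclose(relative_embedding(Z, A),
                   relative_embedding(3.0 * Z @ Q, 3.0 * A @ Q))
```

Comparing two models then amounts to comparing their relative-coordinate matrices directly (e.g. by correlation), since the rotation and scale ambiguities of the raw latent spaces have been factored out.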

Cite

Text

Kucukahmetler et al. "Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry." Transactions on Machine Learning Research, 2026.

Markdown

[Kucukahmetler et al. "Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/kucukahmetler2026tmlr-relative/)

BibTeX

@article{kucukahmetler2026tmlr-relative,
  title     = {{Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry}},
  author    = {Kucukahmetler, Deniz and Hemmann, Maximilian Jean and von Aehrenfeld, Julian Mosig and Amthor, Maximilian and Deubel, Christian and Scherf, Nico and Taha, Diaaeldin},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/kucukahmetler2026tmlr-relative/}
}