Temporal Horizons in Forecasting: A Performance-Learnability Trade-Off
Abstract
When training autoregressive models to forecast dynamical systems, a critical question arises: how far into the future should the model be trained to predict for optimal performance? In this work, we address this question by analyzing the relationship between the geometry of the loss landscape and the training time horizon. Using dynamical systems theory, we prove that loss minima for long horizons generalize well to short-term forecasts, whereas minima found on short horizons result in worse long-term predictions. However, we also prove that the loss landscape becomes rougher as the training horizon grows, making long-horizon training inherently challenging. We validate our theory through numerical experiments and discuss practical implications for selecting training horizons. Our results provide a principled foundation for hyperparameter optimization in autoregressive forecasting models.
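The training setup the abstract describes, fitting a one-step model by unrolling it autoregressively for a chosen horizon, can be sketched as follows. This is a minimal illustration, not the paper's implementation; `step_fn`, `rollout_loss`, and the linear toy system are assumed names for the sake of the example.

```python
import numpy as np

def rollout_loss(step_fn, params, x0, targets, horizon):
    """Mean-squared rollout loss over a training horizon.

    The one-step map `step_fn(params, x)` is applied `horizon` times
    autoregressively from the initial state `x0`; each predicted state
    is compared to the corresponding target state. Longer horizons feed
    prediction errors back into the model, which is what makes the loss
    landscape rougher as the horizon grows.
    """
    x = x0
    loss = 0.0
    for t in range(horizon):
        x = step_fn(params, x)  # autoregressive rollout: feed prediction back in
        loss += np.mean((x - targets[t]) ** 2)
    return loss / horizon

# Toy example: a stable linear one-step map on a 2-dimensional system,
# with targets generated by the true dynamics, so the loss is zero.
step = lambda p, x: p @ x
A = np.array([[0.9, 0.1], [0.0, 0.8]])
x0 = np.array([1.0, 0.0])
targets = [np.linalg.matrix_power(A, t + 1) @ x0 for t in range(5)]
print(rollout_loss(step, A, x0, targets, horizon=5))  # → 0.0 (exact map)
```

Sweeping `horizon` in such a setup is the hyperparameter choice the paper analyzes: short horizons give a smoother loss but minima that generalize worse to long-term forecasts.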
Cite
Text
Aceituno et al. "Temporal Horizons in Forecasting: A Performance-Learnability Trade-Off." Transactions on Machine Learning Research, 2025.
Markdown
[Aceituno et al. "Temporal Horizons in Forecasting: A Performance-Learnability Trade-Off." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/aceituno2025tmlr-temporal/)
BibTeX
@article{aceituno2025tmlr-temporal,
  title   = {{Temporal Horizons in Forecasting: A Performance-Learnability Trade-Off}},
  author  = {Aceituno, Pau Vilimelis and Miller, Jack William and Marti, Noah and Farag, Youssef and Boussange, Victor},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/aceituno2025tmlr-temporal/}
}