HopCast: Calibration of Autoregressive Dynamics Models
Abstract
Deep learning models are often trained to approximate dynamical systems that can be modeled with differential equations. Many of these models are optimized to predict one step ahead; such approaches produce calibrated one-step predictions when the predictive model quantifies uncertainty, as Deep Ensembles do. At inference time, multi-step predictions are generated via autoregression, which requires a sound uncertainty propagation method to yield calibrated multi-step predictions. This work introduces an alternative Predictor-Corrector approach named HopCast that uses Modern Hopfield Networks (MHN) to learn the errors of a deterministic Predictor that approximates the dynamical system. The Corrector predicts a set of errors for the Predictor's output based on a context state at any timestep during autoregression. This set of errors yields sharper, well-calibrated prediction intervals with higher predictive accuracy than baselines without uncertainty propagation. Calibration and prediction performance are evaluated across a set of dynamical systems. This work is also the first to benchmark existing uncertainty propagation methods by their calibration errors.
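The Predictor-Corrector idea sketched in the abstract can be illustrated with a minimal toy example. The sketch below is an assumption-laden stand-in, not the paper's implementation: it uses a hypothetical 1-D system, a deliberately imperfect deterministic predictor, and a simple nearest-neighbor lookup over stored (state, error) pairs in place of the Modern Hopfield Network. During autoregression, the retrieved set of errors supplies both a corrected point estimate and empirical quantiles for a prediction interval at each step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D dynamical system: x_{t+1} = 0.9 * x_t + sin(x_t).
def true_step(x):
    return 0.9 * x + np.sin(x)

# Deterministic Predictor: an imperfect approximation of the dynamics
# (sin truncated to its third-order Taylor polynomial).
def predictor(x):
    return 0.9 * x + x - x**3 / 6

# Memory of (context state, one-step error) pairs built from training states.
train_states = rng.uniform(-2.0, 2.0, size=500)
memory_keys = train_states
memory_errors = true_step(train_states) - predictor(train_states)

def corrector(x, k=25):
    """Retrieve the k stored errors whose context states are closest to x.
    (A nearest-neighbor stand-in for the Hopfield-style associative lookup.)"""
    idx = np.argsort(np.abs(memory_keys - x))[:k]
    return memory_errors[idx]

# Autoregressive rollout: at each step, the retrieved error set gives a
# prediction interval (empirical quantiles) and a corrected point estimate.
x = 1.5
for t in range(5):
    mean = predictor(x)
    errs = corrector(x)
    lo = mean + np.quantile(errs, 0.05)
    hi = mean + np.quantile(errs, 0.95)
    x = mean + errs.mean()  # corrected state fed back for the next step
    print(f"t={t+1}: interval [{lo:.3f}, {hi:.3f}], corrected x={x:.3f}")
```

In this toy setting the interval width tracks how badly the predictor's Taylor truncation fails near the current state, which mirrors the intended behavior of an error-conditioned corrector during rollout.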
Cite
Text
Shahid and Fleming. "HopCast: Calibration of Autoregressive Dynamics Models." Transactions on Machine Learning Research, 2025.
Markdown
[Shahid and Fleming. "HopCast: Calibration of Autoregressive Dynamics Models." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/shahid2025tmlr-hopcast/)
BibTeX
@article{shahid2025tmlr-hopcast,
title = {{HopCast: Calibration of Autoregressive Dynamics Models}},
author = {Shahid, Muhammad Bilal and Fleming, Cody},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/shahid2025tmlr-hopcast/}
}