A Decoder-Only Foundation Model for Time-Series Forecasting

Abstract

Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a decoder-style attention model with input patching, using a large time-series corpus comprising both real-world and synthetic datasets. Experiments on a diverse set of previously unseen forecasting datasets suggest that the model can yield accurate zero-shot forecasts across different domains, forecasting horizons, and temporal granularities.

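The abstract describes the core architecture only at a high level: a decoder-style attention model over input patches of the time series. The sketch below is a minimal, illustrative PyTorch rendition of that idea, not the paper's actual model or configuration; the class name `PatchedDecoderForecaster` and all hyperparameters (patch lengths, model width, layer and head counts) are assumptions made for exposition.

```python
import torch
import torch.nn as nn

class PatchedDecoderForecaster(nn.Module):
    """Illustrative decoder-only forecaster: the context series is split into
    fixed-length input patches, each patch is embedded as one token, a stack of
    causally masked self-attention layers processes the patch tokens, and each
    output token is decoded into a block of future values."""

    def __init__(self, input_patch_len=32, output_patch_len=128,
                 d_model=256, n_heads=4, n_layers=4, max_patches=512):
        super().__init__()
        self.input_patch_len = input_patch_len
        self.embed = nn.Linear(input_patch_len, d_model)       # patch -> token embedding
        self.pos = nn.Embedding(max_patches, d_model)          # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)  # causal mask applied in forward
        self.head = nn.Linear(d_model, output_patch_len)       # token -> block of future values

    def forward(self, context):
        # context: (batch, context_len), with context_len divisible by input_patch_len
        b, t = context.shape
        patches = context.view(b, t // self.input_patch_len, self.input_patch_len)
        positions = torch.arange(patches.size(1), device=context.device)
        tokens = self.embed(patches) + self.pos(positions)
        # Causal mask so each patch token only attends to past patches.
        mask = nn.Transformer.generate_square_subsequent_mask(patches.size(1))
        hidden = self.decoder(tokens, mask=mask)
        return self.head(hidden)  # (batch, num_patches, output_patch_len)

model = PatchedDecoderForecaster()
history = torch.randn(8, 512)   # 8 series, 512 past time steps
forecasts = model(history)
print(forecasts.shape)          # torch.Size([8, 16, 128]); last token predicts the next 128 steps
```

In this decoder-only setup, every patch position is trained to predict the values that follow it, which is what allows a single pretrained model to be applied zero-shot to contexts and horizons of varying lengths.
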
Cite

Text

Das et al. "A Decoder-Only Foundation Model for Time-Series Forecasting." International Conference on Machine Learning, 2024.

Markdown

[Das et al. "A Decoder-Only Foundation Model for Time-Series Forecasting." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/das2024icml-decoderonly/)

BibTeX

@inproceedings{das2024icml-decoderonly,
  title     = {{A Decoder-Only Foundation Model for Time-Series Forecasting}},
  author    = {Das, Abhimanyu and Kong, Weihao and Sen, Rajat and Zhou, Yichen},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {10148--10167},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/das2024icml-decoderonly/}
}