Multi-Resolution Time-Series Transformer for Long-Term Forecasting

Abstract

The performance of transformers for time-series forecasting has improved significantly. Recent architectures learn complex temporal patterns by segmenting a time-series into patches and using the patches as tokens. The patch size controls the ability of transformers to learn the temporal patterns at different frequencies: shorter patches are effective for learning localized, high-frequency patterns, whereas mining long-term seasonalities and trends requires longer patches. Inspired by this observation, we propose a novel framework, Multi-resolution Time-Series Transformer (MTST), which consists of a multi-branch architecture for simultaneous modeling of diverse temporal patterns at different resolutions. In contrast to many existing time-series transformers, we employ relative positional encoding, which is better suited for extracting periodic components at different scales. Extensive experiments on several real-world datasets demonstrate the effectiveness of MTST in comparison to state-of-the-art forecasting techniques.
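
To make the patching idea concrete, below is a minimal sketch of multi-resolution tokenization in PyTorch. The `patchify` helper and the patch sizes (8 and 48) are illustrative assumptions, not the authors' implementation: the point is only that the same look-back window yields many short tokens at a fine resolution and few long tokens at a coarse one, which is what the multi-branch architecture consumes in parallel.

```python
import torch

def patchify(x: torch.Tensor, patch_len: int, stride: int) -> torch.Tensor:
    """Segment series of shape (batch, seq_len) into patch tokens
    of shape (batch, num_patches, patch_len)."""
    return x.unfold(dimension=-1, size=patch_len, step=stride)

# Hypothetical look-back window of length 336 for a batch of 32 series.
x = torch.randn(32, 336)

# Fine resolution: many short patches, suited to high-frequency patterns.
tokens_fine = patchify(x, patch_len=8, stride=8)     # (32, 42, 8)

# Coarse resolution: few long patches, exposing trends and seasonality.
tokens_coarse = patchify(x, patch_len=48, stride=48) # (32, 7, 48)
```

Each branch would then project its tokens to the model dimension and apply self-attention independently; the patch sizes here are arbitrary choices for illustration.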

Cite

Text

Zhang et al. "Multi-Resolution Time-Series Transformer for Long-Term Forecasting." Artificial Intelligence and Statistics, 2024.

Markdown

[Zhang et al. "Multi-Resolution Time-Series Transformer for Long-Term Forecasting." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/zhang2024aistats-multiresolution/)

BibTeX

@inproceedings{zhang2024aistats-multiresolution,
  title     = {{Multi-Resolution Time-Series Transformer for Long-Term Forecasting}},
  author    = {Zhang, Yitian and Ma, Liheng and Pal, Soumyasundar and Zhang, Yingxue and Coates, Mark},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {4222--4230},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/zhang2024aistats-multiresolution/}
}