A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers
Abstract
We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches, which serve as input tokens to the Transformer; (ii) channel-independence, where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design naturally has a three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced for the same look-back window; and the model can attend to a longer history. Our channel-independent patch time series Transformer (PatchTST) significantly improves long-term forecasting accuracy compared with SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring masked pre-training from one dataset to other datasets also produces SOTA forecasting accuracy.
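The two components of the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a NumPy input of shape `(batch, seq_len, n_channels)`, and the `patch_len=16` / `stride=8` values are illustrative defaults (with a long enough look-back window, such a split yields on the order of 64 patch tokens per channel, hence the title).

```python
import numpy as np

def channel_independent_patches(x, patch_len=16, stride=8):
    """Split a multivariate series into channel-independent patches.

    x: array of shape (batch, seq_len, n_channels).
    Returns an array of shape (batch * n_channels, n_patches, patch_len):
    each univariate channel is tokenized separately, so the same
    embedding and Transformer weights can be shared across channels.
    """
    batch, seq_len, n_channels = x.shape
    # Channel-independence: treat each channel as its own univariate series
    # by folding the channel axis into the batch axis.
    x = x.transpose(0, 2, 1).reshape(batch * n_channels, seq_len)
    # Patching: each subseries of length patch_len becomes one input token,
    # reducing the token count (and attention cost) versus per-step tokens.
    n_patches = (seq_len - patch_len) // stride + 1
    starts = np.arange(n_patches) * stride
    patches = np.stack([x[:, s:s + patch_len] for s in starts], axis=1)
    return patches

# Hypothetical example: look-back window of 336 steps, 7 channels.
x = np.random.randn(32, 336, 7)
tokens = channel_independent_patches(x)  # shape (32 * 7, 41, 16)
```

Because attention cost scales with the square of the token count, replacing per-timestep tokens with stride-8 patches cuts the attention map size by roughly a factor of 64 for the same look-back window.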
Cite
Text
Nie et al. "A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers." International Conference on Learning Representations, 2023.
Markdown
[Nie et al. "A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/nie2023iclr-time/)
BibTeX
@inproceedings{nie2023iclr-time,
title = {{A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers}},
author = {Nie, Yuqi and Nguyen, Nam H and Sinthong, Phanwadee and Kalagnanam, Jayant},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/nie2023iclr-time/}
}