TimeDiT: General-Purpose Diffusion Transformers for Time Series Foundation Model

Abstract

Time series modeling is critical for many real-world applications, but most existing approaches are task-specific. Because of unique characteristics such as missing values, irregular sampling, multiple resolutions, and complex temporal dependencies, developing general foundation models for time series is challenging. In this paper, we introduce the Time Series Diffusion Transformer (TimeDiT), equipped with three distinct masking schemes designed to enable a unified training and inference pipeline across diverse time series tasks. TimeDiT leverages the transformer architecture to capture temporal dependencies and employs diffusion processes to generate high-quality candidate samples without stringent assumptions on the target distribution. Extensive experiments on datasets spanning forecasting, imputation, and anomaly detection demonstrate the model's effectiveness. Both in-domain and zero-shot evaluations confirm the model's potential to serve as a robust foundation model for multiple time series applications.
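The abstract does not spell out the three masking schemes. As a minimal sketch only, the Python snippet below illustrates how task-specific observation masks could cast forecasting, imputation, and anomaly detection as one conditional-denoising problem; the mask layouts, function name, and parameters are illustrative assumptions, not the authors' implementation.

# Hedged sketch: task-specific masks that unify several time series tasks
# under one conditional-denoising interface. Mask design is an assumption,
# not taken from the TimeDiT paper.
import torch

def make_mask(task: str, length: int, channels: int,
              horizon: int = 24, missing_rate: float = 0.2) -> torch.Tensor:
    """Return a boolean mask: True = observed (conditioning), False = to generate."""
    mask = torch.ones(channels, length, dtype=torch.bool)
    if task == "forecast":
        mask[:, -horizon:] = False          # hide the future horizon
    elif task == "imputation":
        mask &= torch.rand(channels, length) > missing_rate  # hide random entries
    elif task == "anomaly":
        mask[:] = False                     # reconstruct all; score by residual
    else:
        raise ValueError(f"unknown task: {task}")
    return mask

x = torch.randn(7, 96)                      # a 7-channel series of length 96
mask = make_mask("forecast", length=96, channels=7)
x_cond = torch.where(mask, x, torch.zeros_like(x))  # observed values condition the denoiser

Under this framing, one denoising model can be trained on all three masks, and inference differs only in which positions are generated.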

Cite

Text

Cao et al. "TimeDiT: General-Purpose Diffusion Transformers for Time Series Foundation Model." ICML 2024 Workshops: FM-Wild, 2024.

Markdown

[Cao et al. "TimeDiT: General-Purpose Diffusion Transformers for Time Series Foundation Model." ICML 2024 Workshops: FM-Wild, 2024.](https://mlanthology.org/icmlw/2024/cao2024icmlw-timedit/)

BibTeX

@inproceedings{cao2024icmlw-timedit,
  title     = {{TimeDiT: General-Purpose Diffusion Transformers for Time Series Foundation Model}},
  author    = {Cao, Defu and Ye, Wen and Liu, Yan},
  booktitle = {ICML 2024 Workshops: FM-Wild},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/cao2024icmlw-timedit/}
}