A Language Model-Guided Framework for Mining Time Series with Distributional Shifts

Abstract

Effective utilization of time series data is often constrained by the scarcity of data reflecting complex dynamics, especially under distributional shifts. This paper presents an approach that leverages large language models and data-source software interfaces to collect time series datasets. The approach enlarges data quantity and diversity when the original data is limited or lacks essential properties. We demonstrate the effectiveness of the collected datasets through utility examples and show that time series forecasting foundation models fine-tuned on these datasets outperform their non-fine-tuned counterparts.
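The collection loop the abstract describes can be sketched at a high level: a language model proposes data sources or queries, a software interface fetches each candidate series, and only series exhibiting a distributional shift are kept. The sketch below is illustrative only; the function names, the shift test, and the callback interfaces are assumptions, not the paper's actual framework.

```python
# Hypothetical sketch of an LLM-guided collection loop: a language model
# proposes queries, a data-source interface fetches each series, and a
# crude distribution-shift filter decides what to keep. All names here
# are illustrative assumptions, not the paper's API.
from statistics import mean, stdev
from typing import Callable, List, Tuple


def has_distribution_shift(series: List[float],
                           window: int = 50,
                           z_threshold: float = 3.0) -> bool:
    """Crude shift check: does the last window's mean deviate from the rest?"""
    if len(series) < 2 * window:
        return False
    head, tail = series[:-window], series[-window:]
    mu = mean(head)
    sigma = stdev(head) or 1e-9  # guard against zero spread
    return abs(mean(tail) - mu) / sigma > z_threshold


def collect_datasets(propose_queries: Callable[[str], List[str]],
                     fetch_series: Callable[[str], List[float]],
                     goal: str,
                     max_keep: int = 10) -> List[Tuple[str, List[float]]]:
    """Ask an LLM for candidate queries, fetch each, keep shifted series."""
    kept: List[Tuple[str, List[float]]] = []
    for query in propose_queries(goal):   # LLM suggests sources/queries
        series = fetch_series(query)      # data-source interface call
        if has_distribution_shift(series):
            kept.append((query, series))
        if len(kept) >= max_keep:
            break
    return kept
```

In practice `propose_queries` would wrap an LLM call and `fetch_series` a data API; here they are injected as plain callables so the filtering logic can be exercised with stubs.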

Cite

Text

Zhu et al. "A Language Model-Guided Framework for Mining Time Series with Distributional Shifts." NeurIPS 2024 Workshops: TSALM, 2024.

Markdown

[Zhu et al. "A Language Model-Guided Framework for Mining Time Series with Distributional Shifts." NeurIPS 2024 Workshops: TSALM, 2024.](https://mlanthology.org/neuripsw/2024/zhu2024neuripsw-language-a/)

BibTeX

@inproceedings{zhu2024neuripsw-language-a,
  title     = {{A Language Model-Guided Framework for Mining Time Series with Distributional Shifts}},
  author    = {Zhu, Haibei and El-Laham, Yousef and Fons, Elizabeth and Vyetrenko, Svitlana},
  booktitle = {NeurIPS 2024 Workshops: TSALM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/zhu2024neuripsw-language-a/}
}