Align and Fine-Tune: Enhancing LLMs for Time-Series Forecasting

Abstract

Multivariate time-series forecasting is vital in fields like economic planning and weather prediction, but deep models often require large training datasets, which limits their practicality. Pre-trained Large Language Models (LLMs) have been adapted for time-series tasks, yet challenges persist: time-series data differ fundamentally from linguistic data, and forecasting requires processing temporal information at multiple scales. To address these challenges, we introduce LLM4TS, a framework that leverages LLMs for time-series forecasting through a two-stage fine-tuning process: *time-series alignment*, which adapts the LLM to the characteristics of time-series data, and *forecasting fine-tuning*, which specializes it for downstream forecasting tasks. A novel two-level aggregation method integrates multi-scale temporal information within the LLM. Experiments show that LLM4TS outperforms state-of-the-art methods in both full-shot and few-shot scenarios, and comparisons with other unsupervised approaches highlight its superior representation learning.
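The staged recipe in the abstract can be pictured concretely. Below is a minimal, hypothetical PyTorch sketch of the general pattern of adapting a frozen LLM backbone to patched time series with lightweight trainable components. It assumes a GPT-2 backbone, a linear patch embedding, trainable layer norms, a stage-1 reconstruction head, and a stage-2 forecasting head; all module names and hyperparameters are illustrative assumptions, not the paper's actual implementation, and the paper's two-level aggregation of multi-scale temporal information is omitted for brevity.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model


class PatchEmbed(nn.Module):
    """Project fixed-length time-series patches into the LLM's hidden size."""

    def __init__(self, patch_len: int, d_model: int):
        super().__init__()
        self.proj = nn.Linear(patch_len, d_model)

    def forward(self, x):                     # x: (batch, num_patches, patch_len)
        return self.proj(x)                   # -> (batch, num_patches, d_model)


class TwoStageSketch(nn.Module):
    """Illustrative two-stage adaptation of a frozen LLM to time series."""

    def __init__(self, patch_len: int = 16, pred_len: int = 96):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        d_model = self.backbone.config.n_embd
        self.embed = PatchEmbed(patch_len, d_model)
        self.recon_head = nn.Linear(d_model, patch_len)    # stage 1: next-patch prediction
        self.forecast_head = nn.Linear(d_model, pred_len)  # stage 2: forecasting

        # Freeze the pre-trained weights; leave only the layer norms trainable,
        # a common parameter-efficient choice when adapting LLMs.
        for name, param in self.backbone.named_parameters():
            param.requires_grad = "ln" in name

    def forward(self, patches, stage: int = 2):
        hidden = self.backbone(inputs_embeds=self.embed(patches)).last_hidden_state
        if stage == 1:                            # alignment: predict each next patch
            return self.recon_head(hidden)
        return self.forecast_head(hidden[:, -1])  # forecast from the final state


# Example: 8 series, 32 patches of length 16 -> 96-step forecast.
model = TwoStageSketch()
out = model(torch.randn(8, 32, 16))
print(out.shape)                               # torch.Size([8, 96])
```

In this sketch, stage 1 would train the patch embedding and reconstruction head under an autoregressive objective to align the backbone with time-series inputs, after which stage 2 trains the forecasting head on the downstream horizon; this only illustrates the staged-training skeleton described in the abstract.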

Cite

Text

Chang et al. "Align and Fine-Tune: Enhancing LLMs for Time-Series Forecasting." NeurIPS 2024 Workshops: TSALM, 2024.

Markdown

[Chang et al. "Align and Fine-Tune: Enhancing LLMs for Time-Series Forecasting." NeurIPS 2024 Workshops: TSALM, 2024.](https://mlanthology.org/neuripsw/2024/chang2024neuripsw-align/)

BibTeX

@inproceedings{chang2024neuripsw-align,
  title     = {{Align and Fine-Tune: Enhancing LLMs for Time-Series Forecasting}},
  author    = {Chang, Ching and Wang, Wei-Yao and Peng, Wen-Chih and Chen, Tien-Fu and Samtani, Sagar},
  booktitle = {NeurIPS 2024 Workshops: TSALM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chang2024neuripsw-align/}
}