Towards Time-Series Reasoning with LLMs
Abstract
Multi-modal large language models (MLLMs) have enabled numerous advances in understanding and reasoning in domains like vision, but we have not yet seen this broad success for time-series. Although prior works on time-series MLLMs have shown promising performance in time-series forecasting, very few works show how an LLM could be used for time-series reasoning in natural language. We propose a novel multi-modal time-series LLM approach that learns generalizable information across various domains with powerful zero-shot performance. First, we train a lightweight time-series encoder on top of an LLM to directly extract time-series information. Then, we fine-tune our model with chain-of-thought augmented time-series tasks to encourage the model to generate reasoning paths. We show that our model learns a latent representation that reflects specific time-series features (e.g., slope, frequency), and that it outperforms GPT-4o on a set of zero-shot reasoning tasks across a variety of domains.
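The abstract describes a two-stage recipe: a lightweight encoder maps the raw series into the LLM's embedding space, and the combined model is then fine-tuned on chain-of-thought augmented tasks. As a rough illustration of the first stage only, here is a minimal PyTorch sketch of a patch-based encoder whose outputs are prepended to the text-token embeddings. The class name, patch length, and layer counts are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TimeSeriesEncoder(nn.Module):
    """Hypothetical lightweight encoder: splits a univariate series into
    patches and projects each patch into the LLM's embedding space."""
    def __init__(self, patch_len: int = 16, d_model: int = 64):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len); reshape into non-overlapping patches
        b, t = x.shape
        t = (t // self.patch_len) * self.patch_len
        patches = x[:, :t].reshape(b, -1, self.patch_len)
        return self.encoder(self.proj(patches))  # (batch, n_patches, d_model)

# Prepend the time-series embeddings to the text-token embeddings before
# running the LLM. In this sketch only the encoder has trainable parameters;
# the abstract does not specify whether the LLM weights stay frozen.
encoder = TimeSeriesEncoder()
series = torch.randn(2, 128)           # two series of length 128
ts_embeds = encoder(series)            # (2, 8, 64)
text_embeds = torch.randn(2, 20, 64)   # stand-in for LLM token embeddings
llm_inputs = torch.cat([ts_embeds, text_embeds], dim=1)
print(llm_inputs.shape)                # torch.Size([2, 28, 64])
```

The second stage would fine-tune this combined model on tasks whose targets include natural-language reasoning steps, encouraging the LLM to verbalize a reasoning path rather than emit only a final answer.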
Cite
Text
Chow et al. "Towards Time-Series Reasoning with LLMs." NeurIPS 2024 Workshops: TSALM, 2024.
Markdown
[Chow et al. "Towards Time-Series Reasoning with LLMs." NeurIPS 2024 Workshops: TSALM, 2024.](https://mlanthology.org/neuripsw/2024/chow2024neuripsw-timeseries/)
BibTeX
@inproceedings{chow2024neuripsw-timeseries,
  title = {{Towards Time-Series Reasoning with LLMs}},
  author = {Chow, Winnie and Gardiner, Lauren E. and Hallgrimsson, Haraldur T. and Xu, Maxwell A. and Ren, Shirley You},
  booktitle = {NeurIPS 2024 Workshops: TSALM},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/chow2024neuripsw-timeseries/}
}