LETS-C: Leveraging Text Embedding for Time Series Classification
Abstract
Recent advancements in language modeling have shown promising results in time series analysis, with fine-tuned pre-trained large language models (LLMs) achieving state-of-the-art (SOTA) performance on standard benchmarks. However, these LLMs require millions of trainable parameters, a significant drawback given their size. We propose LETS-C, an alternative approach to leveraging the success of language modeling in the time series domain: instead of fine-tuning an LLM, we use a text embedding model to embed time series and pair the embeddings with a simple classification head composed of convolutional neural networks and a multilayer perceptron. In extensive experiments on a well-established time series classification benchmark, we demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using only 14.5% of the trainable parameters of the SOTA model. Our findings suggest that leveraging text embedding models to encode time series data, combined with a simple yet effective classification head, offers a promising direction for high-performance time series classification with a lightweight model architecture.
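To make the pipeline described in the abstract concrete, here is a minimal sketch of the idea: serialize each time series to text, embed it with a frozen text embedding model, and train only a small CNN-plus-MLP head on the embeddings. The sentence-transformers model, the numeric serialization format, and the head's layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a LETS-C-style pipeline (assumptions: sentence-transformers
# as the text embedder, comma-separated values as the serialization format).
import numpy as np
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

def embed_series(series: np.ndarray, embedder: SentenceTransformer) -> torch.Tensor:
    """Serialize each time series to a string and embed it with the frozen text model."""
    texts = [", ".join(f"{v:.3f}" for v in s) for s in series]  # assumed format
    return torch.tensor(embedder.encode(texts), dtype=torch.float32)

class ClassificationHead(nn.Module):
    """Lightweight trainable head: 1D convolutions over the embedding, then an MLP."""
    def __init__(self, num_classes: int, channels: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the embedding dimension
        )
        self.mlp = nn.Sequential(
            nn.Linear(channels, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        h = self.conv(emb.unsqueeze(1)).squeeze(-1)  # (batch, channels)
        return self.mlp(h)

# Usage: only the head's parameters are trained; the embedder stays frozen.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
X = np.random.randn(8, 100)              # 8 toy series of length 100
emb = embed_series(X, embedder)          # (8, embed_dim)
logits = ClassificationHead(num_classes=2)(emb)
```

Because the embedding model is never updated, the trainable parameter count is just that of the small head, which is the source of the lightweight footprint the abstract reports.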
Cite
Text
Kaur et al. "LETS-C: Leveraging Text Embedding for Time Series Classification." NeurIPS 2024 Workshops: TSALM, 2024.
BibTeX
@inproceedings{kaur2024neuripsw-letsc,
title = {{LETS-C: Leveraging Text Embedding for Time Series Classification}},
author = {Kaur, Rachneet and Zeng, Zhen and Balch, Tucker and Veloso, Manuela},
booktitle = {NeurIPS 2024 Workshops: TSALM},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/kaur2024neuripsw-letsc/}
}