Fine-Tuning a Time Series Foundation Model with Wasserstein Loss

Abstract

Inspired by recent advancements in large language models (LLMs) for Natural Language Processing (NLP), there has been a surge in research focused on developing foundation models for time series forecasting. One approach involves training LLM architectures on tokenized time series data using cross-entropy loss. Although this method has demonstrated promising results, cross-entropy loss is primarily designed for classification tasks and does not account for the distance between classes. To address this limitation, we propose using the Wasserstein loss for such architectures. To validate our approach, we fine-tuned a foundation time series model and evaluated it on 22 zero-shot datasets, comparing the performance of cross-entropy loss with that of Wasserstein loss. Our results demonstrate that replacing cross-entropy loss with Wasserstein loss significantly improves point estimation.
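To illustrate the core idea (this is a minimal sketch, not the paper's implementation): when a time series is tokenized into ordered value bins, the Wasserstein-1 distance between the predicted distribution and the one-hot target reduces to the L1 distance between their CDFs, so a prediction on a nearby bin is penalized less than one on a distant bin, whereas cross-entropy penalizes both equally. The bin setup and tensor shapes below are illustrative assumptions.

```python
# Sketch: 1-D Wasserstein-1 loss over ordered token bins vs. cross-entropy.
# Bin layout, shapes, and the closed-form CDF formulation are assumptions
# for illustration, not the paper's exact training code.
import torch
import torch.nn.functional as F

def wasserstein1_loss(logits: torch.Tensor, target_bins: torch.Tensor) -> torch.Tensor:
    """W1 distance between softmax(logits) and one_hot(target_bins).

    On an ordered 1-D support, W1 equals the L1 distance between the
    cumulative distributions, so errors grow with how far the predicted
    mass sits from the true bin.
    """
    probs = torch.softmax(logits, dim=-1)                              # (batch, num_bins)
    target = F.one_hot(target_bins, num_classes=logits.size(-1)).to(probs.dtype)
    cdf_pred = torch.cumsum(probs, dim=-1)
    cdf_target = torch.cumsum(target, dim=-1)
    return (cdf_pred - cdf_target).abs().sum(dim=-1).mean()

# Toy comparison: both predictions miss the true bin (index 2), but the second
# is farther away. Cross-entropy scores them the same; Wasserstein does not.
logits_near = torch.tensor([[0.0, 5.0, 0.0, 0.0, 0.0]])  # mass on adjacent bin 1
logits_far  = torch.tensor([[0.0, 0.0, 0.0, 0.0, 5.0]])  # mass on distant bin 4
target = torch.tensor([2])

print(F.cross_entropy(logits_near, target), F.cross_entropy(logits_far, target))
print(wasserstein1_loss(logits_near, target), wasserstein1_loss(logits_far, target))
```

Running the toy example, the two cross-entropy values coincide while the Wasserstein loss of the distant prediction is roughly twice that of the adjacent one, which is the distance-awareness the abstract refers to.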

Cite

Text

Chernov. "Fine-Tuning a Time Series Foundation Model with Wasserstein Loss." NeurIPS 2024 Workshops: TSALM, 2024.

Markdown

[Chernov. "Fine-Tuning a Time Series Foundation Model with Wasserstein Loss." NeurIPS 2024 Workshops: TSALM, 2024.](https://mlanthology.org/neuripsw/2024/chernov2024neuripsw-finetuning/)

BibTeX

@inproceedings{chernov2024neuripsw-finetuning,
  title     = {{Fine-Tuning a Time Series Foundation Model with Wasserstein Loss}},
  author    = {Chernov, Andrei},
  booktitle = {NeurIPS 2024 Workshops: TSALM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chernov2024neuripsw-finetuning/}
}