Time Series as Images: Vision Transformer for Irregularly Sampled Time Series
Abstract
Irregularly sampled time series are becoming increasingly prevalent in various domains, especially medical applications. Although many highly customized methods have been proposed to handle the irregularity, effectively modeling the complicated dynamics and high sparsity of such data remains an open problem. This paper studies the problem from a whole new perspective: transforming irregularly sampled time series into line-graph images and adapting powerful vision transformers to perform time series classification in the same way as image classification. Our approach largely simplifies algorithm design without assuming prior knowledge and can potentially be extended into a general-purpose framework. Despite its simplicity, we show that it substantially outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets. Our code and data are available at https://github.com/Leezekun/ViTST.
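The pipeline described in the abstract is simple to sketch: render each irregularly sampled series as a line-graph image, then classify the image with a pretrained vision transformer. The snippet below is a minimal illustration of that idea under stated assumptions, not the authors' released implementation; the `series_to_image` helper, the `google/vit-base-patch16-224-in21k` checkpoint, and the two-class head are illustrative choices.

```python
import io
import numpy as np
import torch
from PIL import Image
import matplotlib.pyplot as plt
from transformers import ViTImageProcessor, ViTForImageClassification

def series_to_image(times, values, size=224):
    """Render an irregularly sampled series as a line-graph image (hypothetical helper)."""
    fig, ax = plt.subplots(figsize=(size / 100, size / 100), dpi=100)
    ax.plot(times, values, marker="*", linewidth=1)  # markers expose the observation points
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight", pad_inches=0)
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf).convert("RGB").resize((size, size))

# Toy irregularly sampled series: non-uniform timestamps over a 48-hour window
times = np.sort(np.random.uniform(0, 48, 30))
values = np.sin(times / 8) + 0.1 * np.random.randn(30)
image = series_to_image(times, values)

# Generic pretrained ViT with a fresh 2-class head (e.g., a binary clinical label);
# in practice the image classifier would be fine-tuned on the rendered images.
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=2
)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, 2): per-class scores for standard classification training
```

For multivariate series, one image per variable can be composed into a single grid-like canvas before classification; see the linked repository for the authors' actual setup.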
Cite

Text

Li et al. "Time Series as Images: Vision Transformer for Irregularly Sampled Time Series." ICLR 2023 Workshops: TSRL4H, 2023.

Markdown

[Li et al. "Time Series as Images: Vision Transformer for Irregularly Sampled Time Series." ICLR 2023 Workshops: TSRL4H, 2023.](https://mlanthology.org/iclrw/2023/li2023iclrw-time/)

BibTeX
@inproceedings{li2023iclrw-time,
  title     = {{Time Series as Images: Vision Transformer for Irregularly Sampled Time Series}},
  author    = {Li, Zekun and Li, Shiyang and Yan, Xifeng},
  booktitle = {ICLR 2023 Workshops: TSRL4H},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/li2023iclrw-time/}
}