Supervised Contrastive Few-Shot Learning for High-Frequency Time Series

Abstract

Significant progress has been made in representation learning, especially with the recent success of self-supervised contrastive learning. However, for time series with little intuitive or semantic meaning, sampling bias may be unavoidable in unsupervised approaches. Although supervised contrastive learning has shown superior performance by leveraging label information, it may also suffer from class collapse. In this study, we consider a realistic industrial scenario in which only limited annotation information is available. A supervised contrastive framework is developed for high-frequency time series representation and classification, wherein a novel variant of the supervised contrastive loss is proposed to incorporate multiple augmentations while inducing spread within each class. Experiments on four mainstream public datasets, as well as a series of sensitivity and ablation analyses, demonstrate that the learned representations are effective and robust compared with direct supervised learning and self-supervised learning, notably in the minimal few-shot setting.
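For intuition, the standard supervised contrastive (SupCon) loss that the paper's variant builds on can be sketched as follows. This is a minimal pure-Python illustration of the generic SupCon formulation, not the paper's proposed loss; the function name and the per-anchor averaging are illustrative choices.

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Generic supervised contrastive (SupCon) loss, illustrative sketch.

    For each anchor i, positives are all other samples sharing its label;
    the loss pulls positives together and pushes all other samples apart
    on the unit hypersphere, scaled by temperature tau.
    """
    # L2-normalise embeddings so similarities are cosine similarities.
    def normalise(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    z = [normalise(v) for v in embeddings]
    n = len(z)
    anchor_losses = []
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        # Average the log-ratio over this anchor's positives.
        per_anchor = sum(
            -math.log(math.exp(dot(z[i], z[p]) / tau) / denom) for p in positives
        ) / len(positives)
        anchor_losses.append(per_anchor)
    return sum(anchor_losses) / len(anchor_losses)
```

As a sanity check, embeddings that cluster by label yield a much lower loss than embeddings whose classes are intermixed, which is exactly the behaviour that, taken to the extreme, produces the class collapse the paper's variant is designed to mitigate.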

Cite

Text

Chen et al. "Supervised Contrastive Few-Shot Learning for High-Frequency Time Series." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I6.25863

Markdown

[Chen et al. "Supervised Contrastive Few-Shot Learning for High-Frequency Time Series." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/chen2023aaai-supervised/) doi:10.1609/AAAI.V37I6.25863

BibTeX

@inproceedings{chen2023aaai-supervised,
  title     = {{Supervised Contrastive Few-Shot Learning for High-Frequency Time Series}},
  author    = {Chen, Xi and Ge, Cheng and Wang, Ming and Wang, Jin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {7069--7077},
  doi       = {10.1609/AAAI.V37I6.25863},
  url       = {https://mlanthology.org/aaai/2023/chen2023aaai-supervised/}
}