Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection

Abstract

Recently, adapting Vision-Language Models (VLMs) to zero-shot visual classification, either by tuning class embeddings with a few prompts (Test-time Prompt Tuning, TPT) or by replacing class names with generated visual samples (a support-set), has shown promising results. However, TPT cannot bridge the semantic gap between modalities, while the support-set cannot be tuned. To this end, we combine the strengths of both approaches and propose a novel framework, namely TEst-time Support-set Tuning for zero-shot Video Classification (TEST-V). It first dilates the support-set with multiple prompts (Multi-prompting Support-set Dilation, MSD) and then erodes it via learnable weights to mine key cues dynamically (Temporal-aware Support-set Erosion, TSE). Specifically, i) MSD expands the support samples for each class based on multiple prompts queried from LLMs, enriching the diversity of the support-set; ii) TSE tunes the support-set with factorized learnable weights, according to temporal prediction consistency, in a self-supervised manner to mine the pivotal supporting cues for each class. TEST-V achieves state-of-the-art results across four benchmarks and offers good interpretability.
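The support-set mechanism described above can be viewed as weighted matching: each class is represented by a set of support features, and per-sample weights (the quantity TSE would tune) control how strongly each sample contributes to the class score. The following is a minimal, hypothetical Python sketch of that idea with fixed weights; all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import math


def _normalize(v):
    """Scale a feature vector to unit length (for cosine similarity)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]


def classify_with_support_set(query, support, weights):
    """Score a query feature against a weighted support-set.

    query   : feature vector of the test sample
    support : dict mapping class name -> list of support feature vectors
    weights : dict mapping class name -> list of nonnegative per-sample
              weights (in TEST-V these would be tuned at test time;
              here they are fixed for illustration)
    Returns the predicted class and the per-class scores.
    """
    q = _normalize(query)
    scores = {}
    for cls, feats in support.items():
        total_w = sum(weights[cls])
        score = 0.0
        for feat, w in zip(feats, weights[cls]):
            f = _normalize(feat)
            sim = sum(a * b for a, b in zip(f, q))  # cosine similarity
            score += (w / total_w) * sim            # weighted pooling
        scores[cls] = score
    return max(scores, key=scores.get), scores
```

With uniform weights this reduces to averaging similarities over each class's support samples; non-uniform weights let the erosion step suppress misleading support samples while keeping the pivotal ones.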

Cite

Text

Liu et al. "Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/239

Markdown

[Liu et al. "Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/liu2024ijcai-large/) doi:10.24963/ijcai.2024/239

BibTeX

@inproceedings{liu2024ijcai-large,
  title     = {{Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection}},
  author    = {Liu, Chen and He, Shibo and Zhou, Qihang and Li, Shizhong and Meng, Wenchao},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {2162--2170},
  doi       = {10.24963/ijcai.2024/239},
  url       = {https://mlanthology.org/ijcai/2024/liu2024ijcai-large/}
}