Self-Adaptive Extreme Penalized Loss for Imbalanced Time Series Prediction
Abstract
Pre-trained vision-language models have shown remarkable potential for downstream tasks. However, fine-tuning them under noisy labels remains an open problem due to challenges such as self-confirmation bias and the limitations of conventional small-loss criteria. In this paper, we propose a unified framework to address these issues, consisting of three key steps: Screening, Rectifying, and Re-Screening. First, a dual-level semantic matching mechanism is introduced to categorize samples as clean, ambiguous, or noisy by leveraging both macro-level and micro-level textual prompts. Second, we design tailored pseudo-labeling strategies to rectify noisy and ambiguous labels, enabling their effective incorporation into the training process. Finally, a re-screening step, which cross-validates predictions with an auxiliary vision-language model, mitigates self-confirmation bias and enhances the robustness of the framework. Extensive experiments across ten datasets demonstrate that the proposed method significantly outperforms existing approaches for tuning vision-language pre-trained models with noisy labels.
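To make the screening step concrete, below is a minimal illustrative sketch, not the authors' implementation. It assumes a CLIP-style encoder producing L2-normalized image and text embeddings, and the function name `screen_samples` and thresholds `tau_clean`/`tau_noisy` are hypothetical; the paper's actual matching rule and fusion of the two prompt levels may differ.

```python
# Hypothetical sketch of dual-level semantic matching for sample screening.
# Assumes pre-computed, L2-normalized embeddings from a CLIP-style model.
import numpy as np

def screen_samples(img_emb, macro_txt_emb, micro_txt_emb, labels,
                   tau_clean=0.6, tau_noisy=0.3):
    """Categorize each sample as 'clean', 'ambiguous', or 'noisy'.

    img_emb:       (N, D) L2-normalized image embeddings
    macro_txt_emb: (C, D) embeddings of coarse class prompts, e.g. "a photo of a {class}"
    micro_txt_emb: (C, D) embeddings of fine-grained, attribute-level prompts
    labels:        (N,)   possibly noisy integer labels in [0, C)
    """
    # Cosine similarity between each image and the prompt of its assigned
    # label, at both the macro (coarse) and micro (fine-grained) level.
    macro_sim = (img_emb * macro_txt_emb[labels]).sum(axis=1)
    micro_sim = (img_emb * micro_txt_emb[labels]).sum(axis=1)
    score = 0.5 * (macro_sim + micro_sim)  # simple average fusion (an assumption)

    # High agreement with the labeled class -> clean; low -> noisy; else ambiguous.
    return np.where(score >= tau_clean, "clean",
                    np.where(score <= tau_noisy, "noisy", "ambiguous"))

# Toy usage with random normalized embeddings (placeholder data only).
rng = np.random.default_rng(0)
l2norm = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
imgs, macro, micro = (l2norm(rng.normal(size=(n, 16))) for n in (8, 3, 3))
print(screen_samples(imgs, macro, micro, rng.integers(0, 3, size=8)))
```

Under this sketch, only samples tagged "noisy" or "ambiguous" would be routed to the rectifying step's pseudo-labeling strategies, while "clean" samples keep their original labels.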
Cite
Text
Wang et al. "Self-Adaptive Extreme Penalized Loss for Imbalanced Time Series Prediction." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/568
Markdown
[Wang et al. "Self-Adaptive Extreme Penalized Loss for Imbalanced Time Series Prediction." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/wang2024ijcai-self/) doi:10.24963/ijcai.2024/568
BibTeX
@inproceedings{wang2024ijcai-self,
title = {{Self-Adaptive Extreme Penalized Loss for Imbalanced Time Series Prediction}},
author = {Wang, Yiyang and Han, Yuchen and Guo, Yuhan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
  pages     = {5135--5143},
doi = {10.24963/ijcai.2024/568},
url = {https://mlanthology.org/ijcai/2024/wang2024ijcai-self/}
}