Learning Textual Prompts for Open-World Semi-Supervised Learning

Abstract

Traditional semi-supervised learning has achieved significant success in closed-world scenarios. To better reflect the openness of the real world, researchers have proposed open-world semi-supervised learning (OWSSL), which enables models to recognize both known and unknown classes even without labels for the unknown classes. Recently, researchers have attempted to improve model performance on visually similar classes by integrating textual information. However, these attempts do not effectively align images with text, resulting in limited performance gains. In response to this challenge, we propose a novel OWSSL method. It adopts a global-and-local textual prompt learning strategy to strengthen image-text alignment, and a forward-and-backward strategy to reduce noise in image-text matching for unlabeled samples, thereby enhancing the model's ability to extract and recognize discriminative features across classes. Experimental results on multiple fine-grained datasets demonstrate that our method achieves significant performance improvements over state-of-the-art methods.

Cite

Text

Fan et al. "Learning Textual Prompts for Open-World Semi-Supervised Learning." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01375

Markdown

[Fan et al. "Learning Textual Prompts for Open-World Semi-Supervised Learning." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/fan2025cvpr-learning/) doi:10.1109/CVPR52734.2025.01375

BibTeX

@inproceedings{fan2025cvpr-learning,
  title     = {{Learning Textual Prompts for Open-World Semi-Supervised Learning}},
  author    = {Fan, Yuxin and Cui, Junbiao and Liang, Jiye},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {14756--14765},
  doi       = {10.1109/CVPR52734.2025.01375},
  url       = {https://mlanthology.org/cvpr/2025/fan2025cvpr-learning/}
}