OpenworldAUC: Towards Unified Evaluation and Optimization for Open-World Prompt Tuning
Abstract
Prompt tuning adapts Vision-Language Models like CLIP to open-world tasks with minimal training costs. In this direction, one typical paradigm evaluates model performance separately on known classes (i.e., base domain) and unseen classes (i.e., new domain). However, real-world scenarios require models to handle inputs without prior domain knowledge. This practical challenge has spurred the development of open-world prompt tuning, which demands a unified evaluation of two stages: 1) detecting whether an input belongs to the base or new domain (P1), and 2) classifying the sample into its correct class (P2). Moreover, as domain distributions are generally unknown, a proper metric should be insensitive to varying base/new sample ratios (P3). However, we find that current metrics, including HM, overall accuracy, and AUROC, fail to satisfy these three properties simultaneously. To bridge this gap, we propose $\mathsf{OpenworldAUC}$, a unified metric that jointly assesses detection and classification through pairwise instance comparisons. To optimize $\mathsf{OpenworldAUC}$ effectively, we introduce Gated Mixture-of-Prompts (GMoP), which employs domain-specific prompts and a gating mechanism to dynamically balance detection and classification. Theoretical analysis guarantees the generalization of GMoP under practical conditions. Experiments on 15 benchmarks in open-world scenarios show that GMoP achieves SOTA performance on $\mathsf{OpenworldAUC}$ and other metrics.
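To make the pairwise idea concrete, below is a minimal sketch of how a detection-plus-classification pairwise metric could be computed. It is an illustrative assumption, not the paper's exact definition: the function name `openworld_auc_sketch`, its inputs, and the equal weighting of pairs are all hypothetical choices for exposition.

```python
import numpy as np

def openworld_auc_sketch(scores_base, correct_base, scores_new, correct_new):
    """Hedged sketch of a pairwise metric mixing detection and classification.

    scores_base : detector scores for base-domain samples (higher = "more base")
    correct_base: 1 if a base sample is classified into its true base class, else 0
    scores_new  : detector scores for new-domain samples
    correct_new : 1 if a new sample is classified into its true new class, else 0

    A (base, new) pair counts as a success only when the detector ranks the
    base sample above the new one AND both samples are classified correctly.
    The score is the fraction of successful pairs, which makes it insensitive
    to the base/new sample ratio, unlike overall accuracy.
    """
    scores_base = np.asarray(scores_base, dtype=float)
    scores_new = np.asarray(scores_new, dtype=float)
    correct_base = np.asarray(correct_base, dtype=float)
    correct_new = np.asarray(correct_new, dtype=float)

    # Pairwise ranking indicator: does the detector score base above new?
    ranked = (scores_base[:, None] > scores_new[None, :]).astype(float)
    # Require correct classification on both sides of each pair.
    success = ranked * correct_base[:, None] * correct_new[None, :]
    return success.mean()

# Toy usage with made-up numbers: 3 base samples, 2 new samples.
print(openworld_auc_sketch(
    scores_base=[0.9, 0.7, 0.8], correct_base=[1, 1, 0],
    scores_new=[0.3, 0.6],       correct_new=[1, 0],
))  # 2 of the 6 pairs succeed -> ~0.33
```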
Cite
Text
Hua et al. "OpenworldAUC: Towards Unified Evaluation and Optimization for Open-World Prompt Tuning." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Hua et al. "OpenworldAUC: Towards Unified Evaluation and Optimization for Open-World Prompt Tuning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/hua2025icml-openworldauc/)
BibTeX
@inproceedings{hua2025icml-openworldauc,
title = {{OpenworldAUC: Towards Unified Evaluation and Optimization for Open-World Prompt Tuning}},
author = {Hua, Cong and Xu, Qianqian and Yang, Zhiyong and Wang, Zitai and Bao, Shilong and Huang, Qingming},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {24975--25020},
volume = {267},
url = {https://mlanthology.org/icml/2025/hua2025icml-openworldauc/}
}