Leveraging First and Zeroth-Order Gradient to Address Imbalanced Black-Box Prompt Tuning via Minimax Optimization
Abstract
Black-box prompt tuning has become a prevalent parameter-efficient paradigm that leverages the capabilities of large language models (LLMs) for customized downstream applications. In practical scenarios, downstream tasks frequently involve heavily imbalanced data distributions. Such imbalance tends to impair performance, causing severe collapse on minority classes. Conducting effective black-box prompt tuning that mitigates the adverse effects of imbalanced data on prompt performance remains a significant challenge. In this paper, we propose black-box prompt tuning with first- and zeroth-order gradients (BPT-FZG) to handle imbalanced data. Specifically, BPT-FZG adopts AUC maximization as the prompt-tuning objective and equivalently reformulates it as a nonconvex-concave saddle point problem, which avoids constructing sample pairs from opposite classes. Concretely, BPT-FZG optimizes the latent representation of the continuous prompt in a low-dimensional subspace under the AUC loss, alternately applying first- and zeroth-order gradients to update the parameters. Furthermore, we establish a theoretical convergence guarantee for BPT-FZG under common assumptions, showing that our method finds a stationary point of the objective function. Our experiments on RoBERTa-large, GPT2-XL, and Llama3 show that BPT-FZG achieves improvements on various imbalanced datasets, demonstrating the effectiveness of our method.
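To make the alternating update pattern in the abstract concrete, the following is a minimal sketch, not the paper's implementation. It assumes the square-loss minimax AUC formulation of Ying et al. (2016), a Gaussian two-point zeroth-order estimator for the black-box prompt variable, and a toy surrogate model; all names (`black_box_scores`, the projection `A`, the step sizes) are illustrative assumptions, not BPT-FZG's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-in for the black-box LLM (hypothetical): the model is
# queried only for scores and never differentiated directly. ---
d_sub, d_prompt, n = 8, 64, 512
A = rng.normal(size=(d_prompt, d_sub))      # fixed low-dimensional subspace projection
X = rng.normal(size=(n, d_prompt))          # surrogate task inputs
y = (rng.random(n) < 0.1).astype(float)     # ~10% positives: imbalanced labels

def black_box_scores(z):
    """Query-only access: scores induced by the prompt embedding A @ z."""
    return 1.0 / (1.0 + np.exp(-X @ (A @ z)))

def auc_minimax_loss(s, a, b, alpha, p):
    """Square-loss AUC objective in minimax form (Ying et al., 2016):
    minimized over (z, a, b), maximized over the dual variable alpha."""
    pos, neg = y == 1.0, y == 0.0
    return (np.mean((1 - p) * (s - a) ** 2 * pos
                    + p * (s - b) ** 2 * neg
                    + 2 * (1 + alpha) * (p * s * neg - (1 - p) * s * pos))
            - p * (1 - p) * alpha ** 2)

p = y.mean()
z = np.zeros(d_sub)
a = b = alpha = 0.0
lr_z, lr_aux, mu = 0.5, 0.1, 1e-3           # illustrative step sizes and smoothing radius

for step in range(200):
    # Zeroth-order step: two-point gradient estimate for the black-box variable z.
    u = rng.normal(size=d_sub)
    f_plus = auc_minimax_loss(black_box_scores(z + mu * u), a, b, alpha, p)
    f_minus = auc_minimax_loss(black_box_scores(z - mu * u), a, b, alpha, p)
    z -= lr_z * (f_plus - f_minus) / (2 * mu) * u

    # First-order steps: the auxiliary variables have closed-form gradients.
    s = black_box_scores(z)
    pos, neg = y == 1.0, y == 0.0
    a -= lr_aux * np.mean(-2 * (1 - p) * (s - a) * pos)
    b -= lr_aux * np.mean(-2 * p * (s - b) * neg)
    # Gradient ascent on alpha, the concave max player of the saddle problem.
    g_alpha = np.mean(2 * (p * s * neg - (1 - p) * s * pos)) - 2 * p * (1 - p) * alpha
    alpha += lr_aux * g_alpha
```

The split reflects the structure the abstract describes: the prompt representation `z` sits behind a query-only model, so its gradient must be estimated zeroth-order, while the auxiliary minimax variables `(a, b, alpha)` appear explicitly in the surrogate loss and admit cheap first-order updates.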
Cite
Text
Zhang et al. "Leveraging First and Zeroth-Order Gradient to Address Imbalanced Black-Box Prompt Tuning via Minimax Optimization." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I21.34397
Markdown
[Zhang et al. "Leveraging First and Zeroth-Order Gradient to Address Imbalanced Black-Box Prompt Tuning via Minimax Optimization." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/zhang2025aaai-leveraging-a/) doi:10.1609/AAAI.V39I21.34397
BibTeX
@inproceedings{zhang2025aaai-leveraging-a,
title = {{Leveraging First and Zeroth-Order Gradient to Address Imbalanced Black-Box Prompt Tuning via Minimax Optimization}},
author = {Zhang, Haozhen and Liu, Zhaogeng and Gu, Bin and Chang, Yi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {22407--22415},
doi = {10.1609/AAAI.V39I21.34397},
url = {https://mlanthology.org/aaai/2025/zhang2025aaai-leveraging-a/}
}